
Planning for Scalability and Performance with Reporting Services

Published: November 1, 2005

Writers: John Miller, Anne Janzer, Brian Welcker

Technical Reviewers: Tudor Trufinescu, Lukasz Pawlowski

Project Editor: Diana Steinmetz

Applies To: Microsoft® SQL Server™ 2005 Reporting Services

Summary: This paper provides information about the scalability characteristics of different Reporting Services implementation architectures. It also offers guidelines, suggestions, and tips for running your own performance tests with Microsoft SQL Server Reporting Services.

On This Page

Introduction
Overview
Scaling Guidelines
Report Caching and Storage
Best Practices for Performance Optimization
Optimizing for Other Workloads
Running Your Own Performance Tests
Conclusion
Appendix A: System Configuration Settings
Appendix B: Performance Measurement Tools

Introduction

Microsoft® SQL Server™ Reporting Services is a reporting platform that combines the scalability and manageability of a centrally managed report server with the flexibility of Web-based and desktop-based report delivery. Reporting Services is a key component of Microsoft’s comprehensive Business Intelligence platform.

For many organizations, delivering information through reports is essential to daily business operations. As a result, reporting performance must be consistent and predictable. As reporting loads increase, organizations must be able to increase reporting capacity in a predictable and cost-effective way.  

About This Document

This document is designed to help customers and partners determine how best to plan, optimize, and scale their Reporting Services implementations for increasing loads. Following are the topics covered in this paper:

  • Performance and scalability characteristics of different hardware configurations such as scaling up and scaling out

  • Performance impact of report caching and file system storage

  • Best practices for optimizing Reporting Services performance

  • Suggestions for running your own performance tests

Although this paper is written for Microsoft SQL Server 2005 Reporting Services, most of the information provided also applies to the earlier version of the product.

Prerequisites

This white paper is not intended to be an exhaustive source of information on Reporting Services. For detailed information about the product, see the product documentation and also the resources available online at http://www.microsoft.com/sql/reporting/.

In addition to Reporting Services, this document assumes that the reader is already familiar with the following topics:

  • Microsoft SQL Server

  • Internet Information Services (IIS)

  • Microsoft .NET Framework

Information about these topics is available on the MSDN Web site at http://msdn2.microsoft.com/default.aspx.

Overview

Reporting Services is a comprehensive server-based platform for creating, managing, and delivering traditional paper-based reports and also interactive, Web-based reports. When executing reports, Reporting Services performs the following basic tasks:

  • Retrieving data to be reported

  • Processing that data according to instructions in the report definition

  • Rendering the report into specific formats

Reporting Services also performs other tasks in support of report processing, such as managing and processing subscriptions, managing and processing snapshot and cache requests, and servicing report management requests.

The workloads in Reporting Services consist of three major scenarios:

  • Interactive access to published reports by online users

  • Scheduled or event-driven report delivery to email or file shares through subscriptions

  • Ad hoc reports created and executed dynamically by online users

This white paper focuses on the first scenario, the execution of published reports by online users. This is the primary workload that most customers are interested in scaling.

Subscription delivery has the advantage of being scheduled, thus giving you greater control over when and where the processing occurs. Interactive reporting is the more difficult to plan for, because much of it depends on the size and complexity of the reports, the number of concurrent users, and the report rendering formats. Users also have high expectations for system responsiveness when accessing interactive reports.

With SQL Server 2005 Reporting Services, end users can interactively create and execute reports by using the new Report Builder tool. The additional load on the Report Server from ad hoc report creation is hard to quantify, because that depends on what users are trying to do and how effectively they do it. Ad hoc reporting scalability will be addressed in a future version of this paper.

This paper contains the general performance guidelines that Microsoft arrived at by creating interactive reporting loads and testing them on different configurations. However, to arrive at definitive performance numbers that reflect your own environment, you will need to run your own performance tests. The graphics and results provided in this paper are intended only to provide insight into the scalability characteristics of the different configurations that are possible.

Scalability vs. Reliability

System scalability is difficult to define, because it often holds different meanings for different people. Confusing the issue is the fact that scalability is often discussed in the same context as reliability. Although reliability is an important consideration that should be factored into any system configuration, its presence may or may not impact actual scalability.

In this document, scalability is defined as the ability of a system to support an increasing workload by incrementally adding system resources without having to fundamentally change the system design or architecture. Ideally, given an increase in system resources, you should see a proportional increase in the system's capability to process more workload. Although it may sound intuitive, achieving “near linear” scalability is often difficult. In practice, systems typically are not capable of achieving perfectly linear scale. This is because of the costs associated with management overhead, coordination, and communication that must take place between the application components that are deployed on disparate systems.

System reliability is based on a slightly different perspective. A reliable system is one that gracefully handles increases in workload without experiencing failure. In addition, reliable systems should not break or stop working altogether as the workload is increased. Instead, performance should degrade smoothly. However, any system will probably become unavailable when pushed hard enough. The difference with reliable systems is that they are able to recover from these events.

The key to successful capacity planning for a Reporting Services implementation is to find the balance between workload and capacity so that the system handles workload increases smoothly, creating a reliable system that meets your scalability requirements.

Scale-up vs. Scale-out

The flexible design of Reporting Services enables customers to deploy its components on a single server or across multiple servers, depending on their needs and preference.   

Customers who are starting with Reporting Services often ask whether they should purchase a single large server (scale-up) or multiple smaller servers (scale-out). This paper describes the scalability characteristics of Reporting Services to help guide your decisions in answering that question.

A scale-up approach uses a large, symmetrical, multi-processor server to provide additional capacity. A benefit of this approach is that, compared to scale-out, it provides a simplified configuration and management experience. The scale-up approach is also the method used for scaling the SQL Server relational engine and Analysis Services.

Scale-out, a configuration enabled in the Enterprise Edition of Reporting Services, is the scaling approach that most customers consider. Primarily, scale-out does the following:

  • Enables customers to incrementally add or remove capacity as needed

  • Offers a very affordable, manageable, and flexible way to add and remove that capacity

  • Allows heavy workloads to be balanced across multiple commodity servers

  • Offers a degree of inherent fault tolerance

If you decide to deploy Reporting Services by using a scale-out configuration, be aware that the coordination between the multiple Report Servers occurs by having each one access a single Report Server catalog installed on a local or remote SQL Server relational database. For detailed information about Reporting Services deployment options, see the online documentation at http://www.microsoft.com/sql/reporting/ and at http://technet.microsoft.com/sql.

Reporting Services Components

To understand scalability fully, you first need to understand the Reporting Services architecture, as shown in Figure 1, and the various components.


Figure 1: Reporting Services Architecture

Reporting Services can be broken down logically into three tiers, as shown in Table 1.

Table 1: Reporting Services components and their functions

Report Server

A Web service that does the following:

  • Handles Simple Object Access Protocol (SOAP) and URL requests

  • Processes reports, including executing queries, evaluating expressions, and generating output formats

  • Provides snapshot and report cache management

  • Supports and enforces security policies and authorization

The Report Server also includes a Windows service that is responsible for scheduled and batch operations. This paper does not address the scalability of this scenario.

Report Server Catalog

The following two SQL Server databases form the catalog:

  • ReportServer contains report content information, including report definitions, report metadata, data source definitions, snapshots, and history. It stores security settings, account information, scheduling, and delivery settings.  

  • ReportServerTempDB houses content necessary to support session management and also cached data for reports.  

The catalog can reside on the same physical system as the Report Server or on a separate system (remote catalog).

Client Applications

Client applications access the server through SOAP Web services and URL requests. The Report Management tool and Report Viewer application are client applications that are included with Reporting Services. Microsoft® Visual Studio® 2005 provides Report Viewer controls for embedding reports in client systems. Report Builder is a report authoring tool used for ad hoc reports. Many third-party software vendors also provide their own client applications.

Scaling Guidelines

This section describes the basic configuration options for Reporting Services and describes how they affect performance and scalability. The goal of this section is to help you identify an effective Reporting Services configuration for your performance and load requirements, and also answer the following questions:

  • Should you consider hosting the catalog on a remote server?

  • Is it better to scale up the Report Server or add another Report Server?

  • What is the best configuration for your four-processor Report Server?

Although the tests that Microsoft performed on different configurations resulted in specific reporting workloads, your actual performance requirements will depend on a number of factors unique to your environment. These include the following:

  • Number of concurrent users

  • Size and complexity of reports generated

  • On-demand versus subscription report generation

  • Live versus cached report generation

The test results in the following sections were used to determine the relative performance and scalability characteristics of the various configurations. Note that the raw metrics, such as page views per second, will be different in your environment and scenarios. The focus is the relative improvement in the metrics as resources are distributed or added to the environment. Later sections in this paper provide guidance for creating your own performance baselines.

Local vs. Remote Configurations

Microsoft has tested two local configurations, running the Report Server and its catalog on a single server.


Figure 2: Local Catalog Implementation

In a local configuration, the SQL Server relational database will contend with the Report Server for available machine resources. If you have sufficient resources, there should be no problem. You might consider setting the maximum amount of memory and number of processors used by the SQL Server database engine to reduce contention with Reporting Services. For more information, see Appendix A. Customers also choose the configuration shown in Figure 2, because it requires only one SQL Server license.
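For example, you can cap the memory and processors available to the database engine with `sp_configure`. This is a hedged sketch, not a recommendation: the 2048 MB cap and the two-processor affinity mask below are illustrative values that you must size for your own hardware.

```sql
-- Illustrative values only: cap the buffer pool so the database engine
-- leaves memory free for the Report Server on the same machine.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;

-- Restrict SQL Server to the first two processors (bitmask 0x3),
-- leaving the remaining processors for Reporting Services.
EXEC sp_configure 'affinity mask', 3;
RECONFIGURE;
```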

In contrast, a remote catalog implementation, shown in Figure 3, spreads Reporting Services components across two physical servers. The first hosts the Report Server engine and the second remotely hosts the catalog.


Figure 3: Remote Catalog Implementation

The remote configuration eliminates contention for machine resources between the Report Server and the SQL Server hosting the catalog. However, you must provide adequate network bandwidth between the Report Server and the catalog server.

Scale-Up and Scale-Out

After you split the catalog to another system, you can choose to either scale up the Report Server, by adding processors, or scale out by adding machines. Figure 4 illustrates a scale-out configuration that uses multiple Report Servers to access a single catalog.


Figure 4: Scale-out configuration using multiple Report Servers accessing a single catalog

Scale-out configurations typically employ a remote relational database server, separate from any Report Server node, for hosting the catalog. Although it is possible to place the catalog on one of the Report Server nodes, this kind of configuration is not recommended, because the database server will compete with the Report Server for resources.

A 4-processor, scale-out configuration uses two 2-processor Report Servers accessing a remote catalog. An 8-processor, scale-out configuration uses four 2-processor Report Servers. Thus, the scale-out configuration multiplies not only processors, but also memory and network connections.

Comparing Local and Remote Catalog Performance

One of the first decisions in scalability planning is whether to host the Report Server catalog on the same system as the Report Server (local mode) or to host the catalog separately (remote mode).

In the local mode, the same physical machine hosts both the Report Server and the Report Server catalog (a SQL Server instance). The catalog is independent of the source databases for report data, which typically reside on different servers.

A single machine setup is the simplest implementation and the least expensive from a licensing standpoint, but has several drawbacks. Most importantly, moving the catalog to a remote server is the first step to enabling a scale-out configuration. This is discussed later in this paper.

In an effort to answer whether it is better to add processors to a local implementation or split out the catalog, tests were run using the following system configurations:

  • 2-processor Report Server, local catalog (2-proc)

  • 2-processor Report Server, remote catalog (2-proc remote)

  • 4-processor Report Server, local catalog (4-proc local)

  • 4-processor Report Server, remote catalog (4-proc remote)

The test results displayed some interesting facts about whether to first scale up processors or to split the report catalog.

  • Using 2-processor systems, the local and remote implementations performed roughly the same for light loads.

  • 4-processor local systems provide better performance in requests per second than 2-processor local systems, but not double the performance, as was the case with the 4-processor remote system.

Table 2 shows the comparison of the four configurations at peak capacity, which was the maximum number of sessions before performance degraded to above a 30-second threshold.

Table 2

Configuration      Avg Req/Sec   Peak Sessions Attained
2 Proc (Local)     Baseline      Baseline
2 Proc (Remote)    -10%          Baseline
4 Proc (Local)     53%           17%
4 Proc (Remote)    100%          117%

Switching from the 2-processor local to the 2-processor remote implementation by itself has virtually no effect on the peak number of sessions the server can support. There is a small dip in throughput, because of the overhead of moving data across the network.

At higher workloads, doubling the processors on a local catalog implementation (2-processor local to 4-processor local) doesn’t actually double the resources available. It only provides a mild increase in peak sessions attained and a 53 percent improvement in requests per second.

However, in a remote catalog installation, doubling the processors from two to four provides essentially linear scalability: the peak number of requests doubles, and the peak sessions attained more than double.

Key Points

  • If you are running a 2-processor local system, splitting the catalog onto another server will result in minimal changes to the overall system performance.

  • Splitting the catalog does offer benefits in areas such as management and monitoring, because the system does not have to allocate resources between the Report Server and the database process.

  • If you are running a 4-processor local system, splitting the catalog onto another server will result in significantly better performance.

  • For installations that are planning for scalability, the remote catalog implementation is the first step toward a scale-out configuration.

Scaling Up

This section looks at the capacity and performance increases available by adding processors (scaling up) in a remote catalog configuration. In this case, the scaling up is from a 2-processor remote to a 4-processor remote installation. In the following tests, the limits were reached after the response time exceeded a predefined threshold of 30 seconds, which was deemed too long for most users to tolerate.
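The 30-second response-time cutoff used in these tests is easy to reproduce in your own measurements. The following is a minimal sketch using only the Python standard library; the report URL in the usage comment is hypothetical, so substitute your own server and report path.

```python
import time
import urllib.request

RESPONSE_LIMIT_SECONDS = 30.0  # threshold used in this paper's tests


def average_response_time(url, samples=10):
    """Time several report requests and return the mean latency in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()  # fetch the rendered report
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


def within_capacity(avg_seconds, limit=RESPONSE_LIMIT_SECONDS):
    """A load level is acceptable while mean response time stays under the limit."""
    return avg_seconds < limit


# Hypothetical report URL; substitute your own server and report path:
# avg = average_response_time(
#     "http://myserver/ReportServer?/Sales/Summary&rs:Format=HTML4.0")
# print(within_capacity(avg))
```

In a real test you would raise the simulated session count until `within_capacity` returns False; that load level is the peak capacity reported in the tables below.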

Table 3

Configuration     Avg Requests/Sec    Maximum # Sessions   Page Views/Minute
2 Proc (Remote)   10.71 (Baseline)    600 (Baseline)       604 (Baseline)
4 Proc (Remote)   23.91 (123%)        1300 (117%)          1327 (120%)

Average Requests per Second

A 4-processor remote system is able to handle significantly more requests per second at high user loads than the 2-processor remote system. Doubling the number of processors available in a remote catalog implementation delivered slightly more than double the average requests per second.

Maximum Number of Sessions

A 4-processor remote configuration was able to support more than double the maximum number of sessions of a 2-processor remote implementation.

Page Views per Minute

The Page Views per Minute metric provides insight into page production capacities. By doubling from two to four processors in a remote catalog implementation, you can achieve a 120 percent improvement in page views at heavy loads.
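The percentages reported in Table 3 are simple improvements over the 2-processor baseline. When comparing your own test runs, the same calculation applies; a quick check using the figures from Table 3:

```python
def pct_improvement(value, baseline):
    """Percent improvement of a measurement over its baseline, rounded to a whole number."""
    return round((value - baseline) / baseline * 100)


# Requests per second: 10.71 baseline vs. 23.91 on four processors
print(pct_improvement(23.91, 10.71))  # -> 123

# Page views per minute: 604 baseline vs. 1327 on four processors
print(pct_improvement(1327, 604))     # -> 120
```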

Key Points

  • After you have split out the report catalog to a separate system, doubling from two to four processors essentially doubles the capacity of the Reporting Services implementation without degrading responsiveness.

  • Tests were run only on servers with two and four processors. Optimizing for systems with larger numbers of processors will be examined in future tests.

Scaling Out

This section looks at the performance and capacity provided by a scale-out configuration. In Microsoft tests, the Report Servers were exact replicas of the servers in the 2-processor remote catalog implementation. Thus, the scale-out machines double and quadruple all system resources (memory, storage, and network cards) as well as processors.

When you compare the system capacity of a 2-processor remote implementation with a 4-processor and 8-processor scale-out, using the scale-out configuration provides nearly linear scalability. The following table summarizes the percentage improvement over a 2-processor, remote baseline for 4-processor and 8-processor scale-out configurations.

Table 4

Configuration         Avg Requests/Sec    Maximum Simultaneous Sessions
2 Proc (Remote)       10.71 (Baseline)    600 (Baseline)
2 x 2 Proc (Remote)   23.87 (210%)        1300 (216%)
4 x 2 Proc (Remote)   45.18 (378%)        2500 (416%)

Comparing peak average requests per second and peak sessions supported, the 2 X 2-processor scale-out configuration provides better than linear scaling compared with the 2-processor remote implementation.

In comparing only the scale-out implementations, moving from a 2 X 2-processor scale-out to a 4 X 2-processor scale-out doesn’t provide true linear scalability. However, it does offer significant improvement in capacity, with an 89 percent improvement in requests per second, and a 92 percent improvement in peak sessions supported.

Key Points

  • The scale-out approach offers a highly cost-effective way to provide capacity without a large hardware investment.

  • If you anticipate continued increase in Reporting Services demand, the scale-out is a flexible way to add additional capacity when you need it.

Comparing Scale-Up with Scale-Out

A straight scale-up versus scale-out scenario compares the capacity of the 4-processor remote catalog and the 4-processor scale-out. In one case, four processors reside in the same Report Server, while the scale-out uses two 2-processor Report Servers.

Tests show little difference between the two implementations, which were fairly evenly matched in terms of average requests per second and the peak number of sessions supported.

The scale-out had a total of 8 GB of memory across both nodes, compared with the 4 GB of memory in the 4-processor remote.  

The 4-processor systems were fairly evenly matched in system responsiveness, given an equal number of sessions, with the 4-processor scale-out holding a slight advantage. This is to be expected, because ASP.NET applications are specifically tuned to take advantage of inexpensive hardware assembled in scale-out configurations.

Key Points

If you have a 2-processor remote catalog implementation and need to double capacity, it makes little difference whether you scale up to a 4-processor Report Server or scale out with two 2-processor Report Servers. Although moving to a scale-out configuration requires the Enterprise Edition of Reporting Services, it has a number of advantages beyond raw capacity. These include the following:

  • Servers with fewer processors are less expensive than large SMP servers.

  • Additional machines can take advantage of dedicated memory address space, without having to move to 64-bit hardware.

  • Adding capacity with a new server instead of upgrading an existing server results in less downtime.

  • Multiple report servers offer better availability; if a single server fails, the overall system continues to answer requests.

  • You can easily expand a scale-out deployment to 6, 8, or 10 processors, whereas SMP servers generally offer diminishing performance returns after 8 processors.

Using 64-Bit Processors

SQL Server 2005 Reporting Services supports 64-bit processors, including the Intel Itanium2 processor and also the x64 architecture from AMD and Intel. On x64 systems, Reporting Services can run in both native 64-bit mode and also the 32-bit Windows on Windows (WOW) subsystem.

In general, 64-bit systems running at the same processor speed will not increase the throughput of reports. Instead, the primary benefit is that users can view and export the output of larger reports. You might get better throughput on 64-bit machines at higher workloads, because contention for memory will be lower and garbage collection will occur less frequently. Microsoft was not able to fully test these platforms for this white paper, but plans to update this document in the future with the results.

Report Caching and Storage

One significant factor in performance and capacity of a Reporting Services implementation is whether users are generating live reports by retrieving data from source systems for each report, or by using cached or snapshot data. This section describes some of the options and also the potential performance impact of these options.  

Cached Instances

Report Server has two levels of caching:

  1. When Report Server generates a report, it retrieves the report definition from the Report Server catalog and the data for the report from the data sources. It then creates an interim report format that it stores in the session cache and writes to the ReportServerTempDB database. It uses this version, the cached instance, to create and render the final report.

    For completely “live” reports, it repeats this process for every report. However, it is possible to direct subsequent requests for a report that has already been processed to the cached instance instead. This significantly reduces the time spent to retrieve data and create the report.

  2. Report Server will also attempt to cache the output format of cached or snapshot reports in memory or in a temporary directory in the file system. If the result of a request is found in the output cache, it will bypass both the processing and rendering steps entirely, thus yielding superior performance. For more information about how to determine which type of caching was used, see the Appendix B section about performance counters.

The cache will eventually have to expire and this will cause the Report Server to gather fresh data. You can control cache expiration according to predefined intervals, schedules (either report-specific or shared), or forced expiration.

Cached reports do have an impact on storage, even though SQL Server 2005 Reporting Services stores the data compactly and also offers compression. In determining how many cached or snapshot images you might maintain, consider the following:

  • Query parameters are applied when the cached instance is created. If you need a report with different query parameters, Reporting Services will generate an additional cached instance.  

  • Report parameters not used in queries (filters) may be used to create different report versions from the same cached instance.
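Because each distinct combination of query parameter values produces its own cached instance, the number of instances to store can grow quickly. A quick illustration (the parameter names and value counts below are hypothetical):

```python
from itertools import product

# Hypothetical distinct values used for each *query* parameter of one report.
query_param_values = {
    "Region": ["North", "South", "East", "West"],
    "Year": [2003, 2004, 2005],
}

# Each unique combination of query parameter values yields its own cached instance.
combinations = list(product(*query_param_values.values()))
print(len(combinations))  # -> 12
```

Filters (report parameters not used in queries) do not multiply this count, which is one reason to prefer filters over query parameters when a report must serve many slight variations.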

Report Snapshots

A snapshot refers to the same interim report format stored in a more persistent state. For example, a snapshot can be stored in the Report Server catalog instead of the ReportServerTempDB database. The catalog can also maintain multiple interim versions as a report history, thus allowing users to select between them.

SQL Server 2005 Reporting Services automatically compresses the snapshots in the catalog. It is also possible to save the snapshot images to a local file system.

The following sections highlight the performance impact of using cached instances, compressed snapshots, and file system storage.

Cached Instances

One way to improve the capacity and scale of your Reporting Services deployment is to avoid executing reports that depend on live data. You can do this by using cached data instead if it will work just as well.  

The graphic in Figure 5 displays the time spent to retrieve, process, and render a single report with 150,000 rows of data, running with both live and cached data.


Figure 5: Timing Calculations for a 150-K Row Simple Report

When aggregated, these numbers reveal that it required 261 percent more time for Reporting Services to run this report using live data versus cached. Reports that return large amounts of live data require significantly more resources than reports that rely on cached data.

It is not unreasonable to ask end users to avoid running live reports during hours of peak system use. Often, users are unaware of the impact that running such reports can have on system performance and other system users. Some “good citizen” practices you might consider conveying to your user population to improve system capacity and responsiveness include the following:

  • Avoid having reports retrieve and execute with large amounts of live data whenever possible. Use cached data instead.

  • If the situation can’t be avoided entirely, at least try to limit the number of such reports that are run, especially during peak hours.

  • When reports can run from cached data, schedule those reports to refresh the cache during off-peak hours in order to avoid the impact on other users.

Compressed Snapshot Storage

With SQL Server 2005 Reporting Services, administrators can designate where and how snapshot data is stored.

Using snapshots improves report performance significantly, but consumes database storage space. To help balance storage and performance, Reporting Services provides a compression option that is controlled by using system properties. The default setting is to compress snapshot data and report definitions. You can also turn off compression.

For example, in a 20-thousand row report, SQL snapshot compression reduces the snapshot size to 20 percent of its uncompressed size. The reduced footprint not only translates into significant storage savings in the Report Server catalog, but it also greatly reduces the network traffic between the Report Server and the catalog.

File System Storage

Another option for snapshot storage is to use file system (FS) storage. This setting is not affected by the SQL compression setting, because the data is stored in the file system.

FS snapshot storage is well suited for remote catalog and scale-out deployments of Reporting Services. When FS storage is enabled, snapshot data is persisted to the local file system of the Report Server. This enables the Report Server to avoid making round trips back to the catalog to support sessions and report requests.  

You can control FS usage by changing the WebServiceUseFileShareStorage key value in the RSReportServer.config file. To turn FS on, change the value as shown in the following:

<Add Key="WebServiceUseFileShareStorage" Value="true" />

To turn FS off, simply set the key value back to "false". 

Note that by default, FS stores snapshot data to the file system in a directory named C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.instance\Reporting Services\RSTempFiles. CleanupCycleMinutes, located in the RSReportServer.config file, controls how frequently unused FS data is removed. Take care to size the storage requirements of this folder accurately on systems with significant activity, to avoid exceeding available storage limits.
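For example, to shorten the cleanup interval you would add or edit the corresponding key in RSReportServer.config; the 10-minute value below is purely illustrative, not the product default.

```xml
<Add Key="CleanupCycleMinutes" Value="10" />
```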

If the Report Server uses a local catalog instead of a remote catalog, it doesn’t make sense to use file system storage. The snapshot data is already stored locally within the SQL Server relational database, which is highly optimized for performing data storage and retrieval. Configuring a local catalog system to use file system storage would introduce unnecessary redundancy in storage and snapshot management.  

Impact of Compression and File System Storage

Microsoft has tested the performance impact of compression and FS snapshot storage on a 4-processor remote system. Both techniques can increase the capacity of the system at heavy loads. Although compression increases the amount of work on the processor, it also reduces the network traffic between the report server and catalog.

Table 5

Configuration          Avg Req/Sec          Page Views/Hr        Peak sessions attained
                       (% of baseline)      (% of baseline)      (% of baseline)
No compression         Baseline             Baseline             Baseline
SQL compression        Baseline             109%                 Baseline
File System storage    126%                 129%                 113%

File system storage lets you support higher numbers of user sessions without degrading response time, and it provides the best scale and capacity of the options tested. In testing a 4-processor remote catalog system, Microsoft was able to support 13 percent more simultaneous user sessions by turning on file system storage of snapshot data.

Key Points

  • You can effectively add capacity to existing systems by using more cached instances and snapshots in place of live reports.

  • For high volume reporting environments, using file system snapshots provides the highest possible levels of capacity and response times in scale-out and remote server implementations.

Best Practices for Performance Optimization

This section summarizes guidelines and best practices for capacity planning and performance optimization.

Choosing a Configuration

In reviewing the scaling up and scaling out guidelines, it becomes clear that there is a spectrum of system configurations in terms of the reporting capacity they are able to deliver.

At the lowest end of the spectrum is the 2-processor Report Server with a local catalog. At the highest end is a scale-out, which provides highly scalable capacity and performance. Although the tests in this paper covered only scale-out configurations with four 2-processor Report Servers and a remote catalog, the scale-out can easily be larger.

Between these two extremes you can host the catalog remotely, add processors to either a local or remote implementation, and split processing among the processors in a single system in a “web garden” approach.

In determining the right configuration for your environment, you first need to identify your specific performance requirements. Following are some simple guidelines you can use:

  • If your system can accommodate more memory, add it. Memory consumption is the first bottleneck you are likely to encounter.

  • If you are concerned about scalability and running with a local catalog, the first thing to do is host the report catalog on a separate server. Only by splitting out the catalog can you then gain significant scalability in performance and load by adding processors.

  • Both scaling up and scaling out are good approaches for up to four processors. Beyond that number, scale-out is probably the safest and most flexible approach. Large SMP systems have not yet been tested.

A scale-out configuration also lets you keep certain processes, such as ad hoc report design and scheduled report operations, from interfering with interactive reporting. For example:

  • Dedicate a specific Report Server as the front end for all Report Builder requests, leaving the other Report Servers available to handle interactive reporting.

  • Dedicate one or more Report Servers to regular subscription or event-driven report generation.

General Performance Optimization Tips

Because the Report Server catalog is hosted on SQL Server, advice for optimizing performance of a SQL Server database server might apply if you suspect a database performance problem. This can involve either your source databases from which you pull data to build reports, or the catalog database used to support all reporting services implementations. You can find detailed information about optimizing the SQL Server at http://www.microsoft.com/technet/ and in “The Data Tier: An Approach to Database Optimization” and the Performance Evaluation section of the SQL Server Books Online.

Following are some general performance optimization techniques to keep in mind when working with Reporting Services:

  • Optimize your report queries. Usually, the bulk of report execution time is spent executing queries and retrieving results. If you are using SQL Server, tools such as Query Analyzer and Profiler can help you optimize queries. Also, the Database Tuning Advisor can suggest better indexes for your database. You should also consider using Analysis Services to increase performance.

  • If you don’t require data in your report, don’t retrieve it. Using database operations such as filtering, grouping, and aggregates can reduce the amount of data that is processed in the report, and thus improve performance.

  • Keep your reports modest in size and complexity. Few users really want to look at a 1,000-page report. If you find that you need to process large reports, look at ways to break them into smaller chunks.

  • If performance is poor even for single users, check the Application Restarts counter in the ASP.NET category. Some antivirus software is known to “touch” configuration files, thus causing expensive Application Domain restarts in the Report Server Web service. For more information, search http://support.microsoft.com/ for articles about antivirus and ASP.NET.

  • If performance is slow on the first Web service access after a period of no activity, disable the idle time-out on the Performance tab of the application pool properties in IIS Manager.

  • Execute reports from cached/snapshot data as opposed to live data, whenever possible.

  • Limit non-essential background processing to off-peak hours in order to avoid contention with online users.

  • If your report server has 4 GB of memory, remember to set the /3GB switch in C:\boot.ini so that application processes can use the additional address space.

  • Housekeeping tasks such as cleaning up sessions in Reporting Services can be expensive if performed while users are active on the system. Instructions about how to adjust the CleanupCycleMinutes interval in your RSReportServer.config file are discussed in Appendix A.

  • Create the Report Server catalog on several data files, and put the data and log files on different physical disks. This is described in detail in Appendix A.

Using Web Gardens

On Microsoft® Windows Server™ 2003, you can increase scalability by configuring the Maximum Number of Worker Processes setting, an approach known as a “Web garden.”

To locate this Internet Information Services (IIS) setting, open the IIS Manager tool. First determine which application pool is assigned to each of the Reporting Services virtual directories (by default, these are named Reports and ReportServer) by right-clicking the virtual directory and selecting Properties; the assigned application pool appears at the bottom of the Virtual Directory tab.

Next, right-click the assigned application pool, select Properties, and view the Performance tab. The Maximum Number of Worker Processes setting appears at the bottom of the Performance tab.

On 2-processor and 4-processor systems, increasing the maximum number of worker processes has a positive impact on capacity and stability. Specifically, changing the maximum number of worker processes from one to four on a 4-processor system enables the Report Processor to handle larger concurrent session loads before performance starts to degrade. Changing the setting has less impact on the 2-processor remote installation than on the 4-processor remote installation.

If you are using Reporting Services on a system that has more than four processors, 32-bit addressability may not be able to supply enough memory to service more than four processes. You should perform additional testing to confirm the results.

Optimizing for Other Workloads

Although this white paper focuses primarily on interactive workloads, the following sections provide some tips for scheduled and ad hoc reporting workloads. Additional guidance for these scenarios will also be provided in future white papers.

Scheduled Report Delivery Performance

Scheduled or subscription reports have the advantage of running on a schedule that is within your control. If your Reporting Services environment provides both scheduled and on-demand reports, you can isolate report subscription processing on an individual Report Server that accesses the same catalog as the other Report Servers but does not handle interactive requests. In this way, subscription report processing cannot detract from on-demand report processing.

Similar to interactive workloads, scalability of scheduled operations can be increased with a scale-out configuration. All background operations are put into a queue in the catalog database and each server that is connected can pull out a job to process.

Ad Hoc Reporting Performance

SQL Server 2005 Reporting Services offers an ad hoc reporting tool that lets business users create their own reports interactively. Report Builder includes a business query model that helps users construct reports without a deep understanding of the underlying data sources.

In using Report Builder, there is a risk that business users can consume significant resources in constructing and running complex, resource-intensive reports that affect not only the Report Server but also source systems.

If you are anticipating widespread use of Report Builder, you can take some steps to protect overall system performance. For example:

  • Isolate Report Builder sessions to a dedicated Report Server so that the unpredictable load cannot affect interactive report processing for other users.

  • Don’t let Report Builder sessions work directly with production OLTP systems. Instead, direct them to copies of your production data sources. Depending on the desired latency, you can populate these by using techniques such as database mirroring and replication, or create a data mart by using Integration Services. In this way, you avoid the danger of having unmanaged report building affect production systems.

Running Your Own Performance Tests

Although it is not a trivial task to simulate actual real-world workload conditions, spending the time to do so is strongly recommended. Many deployments that experience performance-related problems in production were never adequately stress-tested prior to implementation. Planning properly for capacity and adequate sizing of the servers that are necessary to support your production workload is the best way to avoid major headaches before you go live. At a minimum, the following two tasks should be part of your deployment plan:

  • Simulate a representative workload that your Reporting Services implementation site is likely to experience after it is in production.

  • Conduct tests against an environment that is virtually identical to your planned production environment.

By performing these two tasks and recording the metrics discussed in the section, Metrics for Analysis, you will be able to establish a realistic performance baseline in planning your deployment. Such a baseline will be useful any time you have to validate future application, configuration, or hardware changes. In using an established baseline, you can verify that proposed changes will not adversely impact performance by simply rerunning the same tests. Any time you make major changes to your system, you should re-establish a performance baseline.
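Rerunning the same tests against a baseline is easy to automate. The following sketch shows one way to flag throughput regressions; the metric names and 10 percent tolerance are illustrative assumptions, not Reporting Services outputs:

```python
def find_regressions(baseline, current, tolerance=0.10):
    """Return the names of throughput metrics that dropped more than
    `tolerance` (10% by default) below the established baseline."""
    return [name for name, base in baseline.items()
            if current.get(name, 0.0) < base * (1.0 - tolerance)]

# Hypothetical baseline and rerun results after a configuration change.
baseline = {"avg_requests_per_sec": 60.0, "page_views_per_min": 900.0}
rerun = {"avg_requests_per_sec": 50.0, "page_views_per_min": 910.0}

print(find_regressions(baseline, rerun))  # ['avg_requests_per_sec']
```

A drop within tolerance (page views here) is treated as noise; anything beyond it warrants investigation before the change goes to production.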

There are numerous stress tools available from Microsoft and third parties that can be used for establishing a performance baseline. Microsoft® Visual Studio® 2005 Team Test Edition includes tools for load testing on Web sites. For more information, go to http://msdn2.microsoft.com/default.aspx.
The appendices in this paper offer a number of performance counters to help you construct your tests.

Performance Variables

In determining your performance requirements, you must first understand your workload needs. Some customers use Reporting Services to deliver reports on demand through a corporate portal while others rely on scheduled report processing and delivery. Some also support ad hoc report creation using Report Builder.

Other factors affecting workload and performance include the following:

  • Number of users who are simultaneously making report requests

    The more sessions that are active simultaneously, the more machine resources consumed.

  • Size and complexity of report definitions

    Complex reports that process large numbers of rows require significantly more resources than a simple report that processes a few rows.

  • Whether reports are executed from cached or snapshot data or from live data

    Reports that are executed by using cached or snapshot data can consume significantly fewer resources than reports that are run with live data. In addition, using cached reports reduces the load placed on the source system from which the data is queried.  

  • Performance of the source data systems from which Reporting Services obtains the report data

    If queries executed against these systems are slow, your reports will appear to run slowly.

  • Format requested when rendering a report

    Formats such as PDF or Microsoft® Excel® are more resource-intensive than HTML.

  • Performance of the database server that hosts the catalog

    As with source data systems, if the system hosting the catalog is slow, your reports will appear to run slowly.

A number of things that are subject to design or that are user-configurable also affect report performance. These include the following:

  • Application settings as configured in the Reporting Services configuration files, IIS, and the Microsoft® Windows operating system.

  • Configuration and design of the hardware that supports the reporting application.

  • External factors such as your delivery infrastructure, including network capacity, performance of your email server for subscription delivery, or even the availability and performance of file shares.

To properly plan capacity for any Reporting Services deployment, you first need to understand your specific workload characteristics. Your workload may vary over time, with monthly, weekly, or daily fluctuations in usage, based on a number of business factors.   

Metrics for Analysis

As you determine a performance baseline for your system, you should determine meaningful metrics. Microsoft used the following metrics in its scalability tests:

  • Simultaneous Session

  • Average Requests per Second

  • Median Response Time

  • Total Page Views per Minute

The Simultaneous Sessions metric is a good independent variable for determining the other metrics. For example, you can start with 100 simultaneous sessions and grow the population by 100 on each successive run to generate a test workload.

The Average Requests per Second metric helps you understand how many simultaneous Web requests could successfully be serviced by each of the given system configurations. As long as the system is not struggling to service a particular workload, each system test should be able to equally service all the requests made.

As you graph the requests per second against simultaneous sessions, look for the apex of the curve that represents the maximum number of Average Requests per Second that the system was able to service. This provides insight into the peak number of requests per second and maximum number of sessions that each system is capable of servicing. Obviously, Average Requests per Second can easily be extrapolated if you prefer to see this as Average Requests per Minute or Average Requests per Hour.
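Finding the apex and extrapolating to other time units is simple arithmetic. As a sketch (the measurements below are invented for illustration):

```python
# Hypothetical results: simultaneous sessions -> average requests/sec,
# from runs that grow the session population by 100 each time.
runs = {100: 42.0, 200: 55.0, 300: 61.0, 400: 58.0}

# The apex of the curve: the session count at which throughput peaked.
peak_sessions = max(runs, key=runs.get)
peak_rps = runs[peak_sessions]

# Extrapolate Average Requests per Second to per-minute and per-hour.
peak_rpm = peak_rps * 60
peak_rph = peak_rps * 3600

print(peak_sessions, peak_rps, peak_rpm, peak_rph)
```

In this invented data set, throughput peaks at 300 sessions and declines at 400, which marks the capacity limit of the configuration under test.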

Median Response Time (time to last byte or TTLB) is a common metric that is used to understand how long it takes for Reporting Services to respond to a report request. Obviously, lower response times are better.  

Total Page Views per Minute provides insight into the number of report executions and subsequent page views that each system configuration is capable of servicing. You can actually obtain more detailed information about page views. This includes the following:

  • Initial Report Request

    To obtain this information, simulate a user making an initial request to view the first page of a report. Initial requests typically take longer to process than subsequent page view requests.

  • Subsequent Page Views

    To obtain this information, simulate a user making a request to view a subsequent page of the report.

  • Total Page Views

    This is the sum of all Initial Report Requests, plus all Subsequent Page Views, and provides a combined value that represents the overall number of page view requests performed.

Think Time

In real life, users typically wait while they view the results of a report before they make subsequent requests to view other pages or run additional reports.

In internal tests, Microsoft chose a random think time ranging between 25 and 35 seconds. This think time seems realistic, assuming that the typical user executing a report spends a certain amount of time viewing the results of that report before accessing a subsequent page.
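In a custom load-test script, this behavior can be reproduced with a uniformly distributed pause. A minimal sketch:

```python
import random

def think_time(rng=random):
    # Random pause between 25 and 35 seconds, matching the range
    # Microsoft used in its internal tests.
    return rng.uniform(25.0, 35.0)

# A simulated user would sleep for think_time() seconds between the
# initial report request and each subsequent page-view request.
samples = [think_time() for _ in range(1000)]
print(min(samples), max(samples))
```

Most commercial and Microsoft load-testing tools expose an equivalent randomized think-time setting, so you rarely need to code this by hand.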

Named User Population vs. Simultaneous Active Sessions

A named user population is not equivalent to the number of simultaneous sessions actively making requests against Reporting Services. While planning your own deployment, keep in mind the following:

  • The named user population represents the sum of all users who have authorization and potential to access your Report Servers.

  • Only a fraction of the named user population represents simultaneous sessions that are working interactively on the system at any given time.  

  • Because of think time, only a fraction of these are simultaneously active sessions that are concurrently issuing requests against the Report Server at a given time.
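These relationships can be turned into a rough sizing estimate. The ratios below are illustrative assumptions, not Reporting Services guidance:

```python
def estimate_active_sessions(named_users, concurrent_fraction,
                             think_time_s, request_time_s):
    """Rough estimate of sessions issuing requests at any instant.

    Of the named user population, only `concurrent_fraction` have a
    session open; of those, only the fraction of time spent waiting on
    a request (rather than thinking) is simultaneously active.
    """
    open_sessions = named_users * concurrent_fraction
    busy_fraction = request_time_s / (think_time_s + request_time_s)
    return open_sessions * busy_fraction

# Example: 5,000 named users, 10% with open sessions, 30 s think time,
# 2 s average request time -> about 31 simultaneously active sessions.
print(estimate_active_sessions(5000, 0.10, 30.0, 2.0))
```

Even a crude estimate like this shows why sizing for the named user population, rather than for active sessions, dramatically overstates the required capacity.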

Conclusion

Microsoft SQL Server Reporting Services was designed to meet the needs of a wide range of organizations for cost-effective, flexible reporting to optimize business productivity.

Built on the .NET platform and the Windows Server system, Reporting Services offers the scalability and reliability to support demanding enterprise environments. Its modular extensible architecture, combined with open interfaces and APIs, supports integration into nearly every IT environment. In this way, it effectively connects people with the information they need throughout the enterprise.

In most situations, adding machine resources delivers near-linear increases in the scalability of a Reporting Services deployment. The first logical step toward increasing system capacity is to separate the Report Server catalog onto a remote server. After this, you must decide whether it is better to pursue a scale-up or scale-out approach to increasing the capacity of Reporting Services. Microsoft testing demonstrates that both approaches are effective. Therefore, your choice will most likely depend on economics and also your own comfort level and preferences.

Regardless of the scaling approach you eventually select, it is important to establish your own custom baseline. This is so you can monitor whether the system’s capacity improves or worsens when you make configuration changes. Only by measuring with a workload that reflects your specific user requirements will you be able to determine whether changes have a positive or negative impact on the system.

Microsoft provides several tools, as described in this white paper, to help you measure and monitor the performance of Reporting Services. Using the guidance provided in this paper, you should be able to successfully perform basic capacity planning for Reporting Services deployments and monitor ongoing operations.

Appendix A: System Configuration Settings

When configuring your Report Server system, you should look at the following settings and options to optimize performance. They are particularly important when creating baseline performance measurements.

CleanupCycleMinutes

The CleanupCycleMinutes field controls how often Reporting Services performs certain housekeeping tasks, such as cleaning expired and orphaned sessions. These tasks can be expensive to perform during busy times. If you have the available disk space to store session data for longer periods, increasing the value of CleanupCycleMinutes reduces how often housekeeping is performed. You will also want to increase this value when you run baseline performance tests, to prevent housekeeping from running in the middle of them.

The CleanupCycleMinutes field resides in the RSReportServer.config file. By default, this file is located in C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.Instance\Reporting Services\ReportServer. The default setting is 10. However, Microsoft has changed this to 1200 for its scalability tests to prevent those processes from starting in the middle of the test runs.
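For example, the entry in RSReportServer.config would look like the following (1200 matches the value Microsoft used in its tests):

```xml
<Add Key="CleanupCycleMinutes" Value="1200" />
```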

MaxActiveReqForOneUser

By default, Reporting Services limits the number of outstanding URL and Web service requests that any single user can have open simultaneously, in order to guard against denial of service (DoS) attacks. When this upper limit is reached, the Report Server automatically drops subsequent requests from that user. As administrator, you can increase this limit so that more requests can be open simultaneously for each user.

This value is also found in the RSReportServer.config file. The default setting is 20. However, you may have to adjust this setting upward, depending on how many unique users you have in your testing scenario.
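In RSReportServer.config, the entry looks like the following (the value 40 here is only an example):

```xml
<Add Key="MaxActiveReqForOneUser" Value="40" />
```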

Setting memory configuration limits on SQL Server when used on the same machine as Report Server

You can partition the memory consumed by SQL Server and the memory consumed by Report Server by “pinning” SQL Server to a specific amount of memory. To do this, set both the minimum and the maximum amount of memory to a fraction of the physical memory of the machine. For example, on a machine with 3 GB available to all Windows applications, you might pin SQL Server at 1 GB and leave the rest to Reporting Services.
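One way to pin SQL Server is with sp_configure. The following T-SQL sketch pins the instance at 1 GB, matching the example above; the memory options take values in megabytes:

```sql
-- Expose the advanced options that include the memory settings.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Pin SQL Server to 1 GB by setting min and max server memory (in MB)
-- to the same value.
EXEC sp_configure 'min server memory', 1024;
EXEC sp_configure 'max server memory', 1024;
RECONFIGURE;
```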

ReportServerTempDB Partitioning

For scalability tests, Microsoft created a partitioned ReportServerTempDB database consisting of ten separate 1-GB data files and a 1-GB transaction log, in an effort to improve parallelism for disk I/O operations. If snapshots had been used in the test workload, partitioning the ReportServer database would also have been beneficial. All 11 files were allocated across a single RAID 0+1 storage device to optimize both write and read activity to the catalog. The following sample T-SQL script demonstrates the general idea.

use master
go

drop database ReportServerTempDB
go

create database ReportServerTempDB
ON PRIMARY
( NAME = ReportServerTempDB1,
   FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.Instance\data\ReportServerTempDB1.mdf',
   SIZE = 1GB,
   FILEGROWTH = 20),
( NAME = ReportServerTempDB2,
   FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.Instance\data\ReportServerTempDB2.ndf',
   SIZE = 1GB,
   FILEGROWTH = 20),
-- ...files 3 through 9 follow the same pattern...
( NAME = ReportServerTempDB10,
   FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.Instance\data\ReportServerTempDB10.ndf',
   SIZE = 1GB,
   FILEGROWTH = 20)
LOG ON
( NAME = ReportServerTempDBLog,
   FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.Instance\data\ReportServerTempDB_Log.ldf',
   SIZE = 1GB,
   FILEGROWTH = 20)
COLLATE Latin1_General_CI_AS_KS_WS
go

use ReportServerTempDB
go

exec sp_addrole 'RSExecRole'
go

Appendix B: Performance Measurement Tools

This section describes various tools available to measure and monitor Reporting Services performance.  

Windows Performance Monitor Tool

You can monitor Reporting Services performance counters by using the integrated Windows performance tool, which is part of all Microsoft® Windows® operating systems on which Reporting Services runs. Monitoring specific performance counters within your Reporting Services implementation enables you to do the following:

  • Estimate system requirements needed to support an anticipated workload.

  • Create a performance baseline to measure impact of configuration changes and application upgrades.

  • Monitor application performance under load, whether real or artificially generated.

  • Verify that hardware upgrades have the desired impact on performance.

  • Validate that changes made to your system configuration have the desired impact on performance.

If you need to monitor multiple Report Server instances from a single machine, you can monitor those instances together or separately. Choosing which instances to include is part of adding a counter. For additional information about using the performance tool included with Windows, see the Microsoft Windows product documentation.

To access the performance tool

  • From the Start menu, click Run.

  • Enter perfmon in the Open text box and then click OK.

  • In the Performance tool, select the System Monitor object in the left-hand pane and then right-click on the Performance chart.

  • Select Add Counters.

You should now be in a position to select the objects and counters that you want to monitor.

ASP.NET Application Performance Counters

Most of the information available about ASP.NET application performance counters was recently consolidated into a single comprehensive document titled “Improving .NET Application Performance and Scalability.” The following counters are important for monitoring and optimizing performance of ASP.NET applications, including Reporting Services.

  • Processor: % Processor Time (instance: _Total)

    Monitors CPU utilization on the Web server computer. Low CPU utilization, or the inability to maximize CPU utilization regardless of client load, can signify contention for a lock or a resource in your Web application.

  • Process: % Processor Time (instance: aspnet_wp or w3wp, depending on the version of IIS)

    Percent of processor time consumed by the ASP.NET worker process. When you compare performance under standard load against a previously captured baseline, decreases in this counter reveal lower processor requirements and, therefore, improved scalability.

  • Process: Working Set (instance: aspnet_wp or w3wp, depending on the version of IIS)

    Quantity of memory that is actively used by ASP.NET. Although the application developer has the most control over how memory is consumed by an application, system administrators can have a significant impact by adjusting the session time-out period.

  • Process: Private Bytes (instance: aspnet_wp or w3wp, depending on the version of IIS)

    The current size, in bytes, of memory that this process has allocated and that cannot be shared with other processes. Unusual spikes likely indicate a bottleneck somewhere that is causing the worker process to hold onto memory longer than desired. Sudden drops to nearly zero likely indicate an ASP.NET application restart because of an unforeseen problem. To verify, monitor ASP.NET Application Restarts.

  • ASP.NET Applications: Requests/Sec (instance: _Total)

    Permits you to verify that requests are handled as quickly as they are sent. If the number of requests handled per second falls below the number of requests generated per second, queuing is occurring. This generally means that the maximum request rate has been surpassed.

  • ASP.NET Applications: Errors Total (instance: _Total)

    Total number of errors that occur during the execution of HTTP requests, including any parser, compilation, or run-time errors. This counter is the sum of the Errors During Compilation, Errors During Preprocessing, and Errors During Execution counters. A well-functioning Web server should not generate many errors. If errors occur in your ASP.NET Web application, their presence may skew your actual throughput results.

  • ASP.NET: Request Execution Time

    Shows the time, in milliseconds, that it takes to render the last requested page and then transmit it to the user. This counter will generally be greater than the time a trace shows to render the page, because it is a more comprehensive measurement of start-to-finish request time. If this counter shows a lower average when compared to your established baseline, both scalability and performance of the application have increased.

  • ASP.NET: Application Restarts

    Number of times that an application has been restarted during the Web server's lifetime. Application restarts are incremented with each Application_OnEnd event. An application restart can occur because of changes to the Web.config file, changes to assemblies stored in the application's \bin directory, or too many changes in Web Forms pages. Unexpected increases in this counter can mean that unforeseen problems are causing your Web application to shut down. In such circumstances, you should investigate the cause.

  • ASP.NET: Requests Queued

    Number of requests in the queue waiting for service. When this number starts to increment linearly with respect to client load, the Web server computer has reached the limit of concurrent requests it can process. The default maximum for this counter is 5,000. You can change this setting in the computer's Machine.config file.

  • ASP.NET: Worker Process Restarts

    Number of times a worker process has been restarted on the server computer. A worker process can be restarted if it fails unexpectedly or when it is intentionally recycled. In the case of unexpected increases in this counter, you should investigate the cause.

In addition to the core monitoring set named above, the following performance counters provide added value when you are trying to diagnose specific performance problems with ASP.NET applications.

  • ASP.NET Applications: Pipeline Instance Count (instance: _Total)

    Number of active request pipeline instances for the specified ASP.NET application. Because only one execution thread can run within a pipeline instance, this number gives the maximum number of concurrent requests that are being processed for a particular application. In most circumstances, it is better for this number to be low when under load, signifying that the processors are being well utilized.

  • .NET CLR Exceptions: # of Exceps Thrown

    Displays the number of exceptions thrown in an application. Unanticipated increases in this number may signify possible performance problems. The mere presence of exceptions is not necessarily a cause for concern, because some code paths rely on exceptions as part of their normal functioning. For example, the HttpResponse.Redirect method does its work by throwing an uncatchable exception, ThreadAbortException. As such, it is more useful to track this counter against ASP.NET applications, and to use the Errors Total counter to determine whether the exceptions are generating unexpected errors from the application.

  • System: Context Switches/sec

    Measures the rate at which thread contexts are switched by all processors on the Web server computer. A high number for this counter may indicate either high contention on locks or many switches between user and kernel mode by a thread. Further investigation with sampling profilers and other tools may be warranted.

Reporting Services Performance Counters

Reporting Services includes a set of its own performance counters for collecting information about report processing and resource consumption. Two objects appear in the Windows performance monitor tool to enable you to monitor the status and activity of instances and components: the MSRS 2005 Web Service and the MSRS 2005 Windows Service objects.

The MSRS 2005 Web Service performance object includes a collection of counters used to track Report Server processing typically initiated through on-line, interactive report viewing operations. These counters are reset whenever ASP.NET stops the Web service. The following counters are available for monitoring Report Server performance, together with descriptions of their purpose.

Performance Object: RS Web Service

Counter

Description

Active Sessions

Number of active sessions. This counter provides a count of all browser sessions that are not yet expired. This is not the number of requests that are served simultaneously, but the number of sessions stored in the ReportServerTempDB database.

Cache Hits/Sec

Number of report requests per second that were retrieved from the cache in the catalog. When this counter is incremented and Memory Cache Hits/Sec is not, the report data has not been reprocessed, but the page has been re-rendered. Use this counter in conjunction with Memory Cache Hits/Sec to determine whether the resources used for caching, disk or memory, are sufficient.

Cache Misses/Sec

Number of requests per second that failed to return a report from the catalog, as opposed to in-memory. Use this counter in conjunction with Memory Cache Misses/Sec to determine whether the resources used for caching, disk or memory, are sufficient.

First Session Requests/Sec

Number of new user sessions that are started from the Report Server cache each second.

Memory Cache Hits/Sec

Number of times per second that reports were retrieved from the in-memory cache. In-memory cache is a part of the Reporting Services cache that stores rendered reports in memory or in temporary files. This will provide the best performance for requests, because there is no processing required. When in-memory cache is used, the report server does not query SQL Server for cached content.

Memory Cache Misses/Sec

Number of times per second that reports could not be retrieved from the in-memory cache.

Next Session Requests/Sec

Number of requests per second for reports that are open in an existing session.

Report Requests

Number of reports that are currently active and being handled by the Report Server.

Reports Executed/Sec

Number of successful report executions per second. This counter provides statistics on report volume. Use this counter with Request/Sec to compare execution to report requests that can be returned from cache.

Requests/Sec

Number of requests per second made to the Report Server. This counter tracks all types of requests that are handled by the Report Server.

Total Cache Hits

Total number of requests for reports from cache since the service started. This counter is reset whenever ASP.NET stops the Web service.

Total Cache Misses

Total number of times that a report could not be returned from cache since the service started. This counter is reset whenever ASP.NET stops the Web service. Use this counter to determine whether disk space and memory are sufficient.

Total Memory Cache Hits

Total number of cached reports that were returned from the in-memory cache since the service started. This counter is reset whenever ASP.NET stops the Web service. In-memory cache is a part of the cache that stores reports in CPU memory. When in-memory cache is used, the Report Server does not query SQL Server for cached content.

Total Memory Cache Misses

Total number of cache misses against the in-memory cache since the service started. This counter is reset whenever ASP.NET stops the Web service.

Total Processing Failures

Total number of report processing failures that have occurred since the service started. This counter is reset whenever ASP.NET stops the Web service. Processing failures can originate from the report processor or any extension.

Total Reports Executed

Total number of reports that executed successfully since the service started.

Total Requests

Total number of all requests made to the Report Server since the service started.
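The cache counters above are most useful in combination. As a rough illustration of how you might interpret sampled values, the following Python sketch computes the overall and in-memory cache hit ratios from hypothetical counter readings (the numbers are invented for the example, not real measurements):

```python
# Illustrative only: interpret RS Web Service cache counters.
# All counter values below are hypothetical sample readings.

def hit_ratio(hits, misses):
    """Fraction of requests served from cache; None if there were no requests."""
    total = hits + misses
    return hits / total if total else None

# Hypothetical samples of the cumulative counters
total_cache_hits = 900
total_cache_misses = 100
total_memory_cache_hits = 600
total_memory_cache_misses = 400

overall = hit_ratio(total_cache_hits, total_cache_misses)
in_memory = hit_ratio(total_memory_cache_hits, total_memory_cache_misses)

print(f"Overall cache hit ratio:   {overall:.0%}")    # 90%
print(f"In-memory cache hit ratio: {in_memory:.0%}")  # 60%

# A high overall ratio combined with a low in-memory ratio suggests that
# reports are being served from the cache in ReportServerTempDB rather
# than from memory; more RAM for the in-memory cache may help.
```

The same arithmetic applies to the per-second variants (Cache Hits/Sec versus Memory Cache Hits/Sec) when you sample over an interval.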

The RS Windows Service performance object includes a collection of counters used to track report processing that is initiated through scheduled operations. Scheduled operations can include subscription and delivery, report execution snapshots, and report history. Although the Microsoft test workloads did not include any scheduling or delivery operations, the performance counters are listed here for convenience.

This performance object is used to monitor the Report Server Windows service. If you are running a Report Server in a scale-out, the counts apply to the selected server, not to the scale-out as a whole. These counters are reset when the application domain recycles. The following table provides a list of the counters available for monitoring Scheduling and Delivery, and also a description of their purpose.

Performance Object: RS Windows Service

Counter

Description

Cache Flushes/Sec

Number of cache flushes per second.

Cache Hits/Sec

Number of requests per second for cached reports.

Cache Misses/Sec

Number of requests per second that failed to return a report from cache.

Delivers/Sec

Number of report deliveries per second from any delivery extension.

Events/Sec

Number of events processed per second. Events that are monitored include SnapshotUpdated and TimedSubscription.

Memory Cache Hits/Sec

Number of times per second that reports were retrieved from the in-memory cache.

Memory Cache Misses/Sec

Number of times per second that reports could not be retrieved from the in-memory cache.

Report Requests

Number of reports that are currently active and being handled by the Report Server. Use this counter to evaluate caching strategy. Requests are made to specific rendering extensions. There may be significantly more requests than report executions.

Reports Executed/Sec

Number of successful report executions per second.

Snapshot Updates/Sec

Number of scheduled updates per second for report execution snapshots.

Total App Domain Recycles

Total number of application domain recycles since the service started.

Total Cache Flushes

Total number of Report Server cache updates since the service started.

Total Cache Hits

Total number of requests for reports from cache since the service started.

Total Cache Misses

Total number of times that a report could not be returned from cache since the service started. Use this counter to determine whether you need more disk space or memory.

Total Deliveries

Total number of reports delivered by the Scheduling and Delivery Processor, for all delivery extensions.

Total Events

Total number of events since the service started.

Total Memory Cache Hits

Total number of cached reports that were returned from the in-memory cache since the service started.

Total Memory Cache Misses

Total number of cache misses against the in-memory cache since the service started.

Total Processing Failures

Total number of report processing failures that have occurred since the service started. Processing failures can originate from the report processor or any extension.

Total Rejected Threads

Total number of data processing threads rejected for asynchronous processing and subsequently handled as a synchronous process in the same thread.

Total Report Executions

Total number of reports executed.

Total Requests

Total number of all requests made to the Report Server since the service started.

Total Snapshot Updates

Total number of report execution snapshots updated since the service started.

If you are troubleshooting performance issues with Reporting Services, it is generally helpful to record the following performance counter objects: ASP.NET, ASP.NET Applications, Process, System, Memory, Physical Disk, .NET CLR Exceptions, .NET CLR Memory, .NET CLR Loading, .NET CLR Locks and Threads, and .NET CLR Data.

Optional Reporting Services Performance Counters  

Following is a list of performance counters that apply to the RS Web Service, but that are not installed by default. You can enable them to gain additional insight while conducting performance optimization. To do this, execute the following command from the command prompt:

installutil.exe /u ReportingServicesLibrary.dll

followed by:

installutil.exe ReportingServicesLibrary.dll

To successfully execute these commands, you may first have to modify your path to include the directory location where the Microsoft .NET Framework version is installed. After that is in your path, execute the previous commands from the directory that contains the file ReportingServicesLibrary.dll. By default, this is installed in C:\Program Files\Microsoft SQL Server\MSSQL\MSSQL.instance\Reporting Services\ReportServer\bin. These counters have not been fully localized.

Active Database Connections

Number of active database connections open at one time. Counts only connections to the Report Server catalog.

Active Datasource Connections

Number of active data source connections open at one time. Counts only data source connections opened by the reports that are currently running.

Active Threads

Number of threads currently active. In the Web service, this includes threads involved in servicing requests. In the delivery service, it includes worker threads and also maintenance and polling threads.

Byte count

For the last request, number of bytes returned to the client when rendering the current report. This is similar to the corresponding execution log entry.

Row Count

For the last request, number of rows returned by the current report. This is similar to the corresponding execution log entry.

Time in Compression

For the last request, number of milliseconds spent compressing snapshots and PDF reports.

Time in data source access

For the last request, number of milliseconds spent accessing data source information for reports. Includes executing the queries and fetching the results. Similar to the corresponding execution log entry.

Time in database

For the last request, number of milliseconds spent in accessing Report Server catalog information.

Time in processing

For the last request, number of milliseconds spent in report processing. This is similar to the corresponding execution log entry.

Time in rendering

For the last request, number of milliseconds spent in report rendering. This is similar to the corresponding execution log entry.

Reporting Services Execution Log

The Reporting Services execution log is an additional source of information for monitoring Reporting Services performance.

This log contains information about the reports that execute on the server or across multiple servers in a scale-out. You can use the report execution log to find out how often a report is requested, what formats are used the most, and what percent of processing time is spent on each processing phase.  

Unlike performance monitor objects, which may be reset when the application domain recycles or whenever ASP.NET stops the Web service, the execution log values remain intact unless you specifically reset them. Another advantage to using the execution log is that it reveals, in granular detail, how much time each report spent retrieving data, processing the report, and rendering it into the requested format. This granular level of detail can be invaluable in identifying and debugging performance bottlenecks.

The report execution log captures the following data:

  • Name of the report server instance that handled the request

  • Report identifier

  • User identifier

  • Request type, either user or system

  • Rendering format

  • Parameter values used for a report execution

  • Start and stop times that indicate the duration of a report process

  • Percent of time spent retrieving the data, processing the report, and rendering the report

  • Source of the report execution (1=Live, 2=Cache, 3=Snapshot, 4=History)

  • Status, either rsSuccess or an error code, with only the first error recorded if multiple errors occur

  • Size of rendered reports in bytes

  • Number of rows returned from queries

The Report Server logs data about report execution to a table in the catalog. This table does not provide complete information by itself, nor does it present data in a format that is readily understandable to users.

To view report execution data, first run the Integration Services package that comes with the Reporting Services samples, which extracts the data from the execution log into a table structure that is easy to query against. This is the recommended approach, because the underlying table structures inside the catalog may change in future versions of the product. For detailed instructions about how to set up, configure, and access data from the Report Server execution log, search for “Querying and Reporting on Report Execution Log Data” in SQL Server Books Online.
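Once the per-phase durations are available, finding the bottleneck for a slow report is simple arithmetic. The following Python sketch is illustrative only: the function name and the millisecond values are hypothetical, and the three inputs stand in for the execution log's data retrieval, report processing, and rendering durations:

```python
# Illustrative sketch: break a report execution into its three phases.
# The field names mirror the execution log's data retrieval, processing,
# and rendering durations; the millisecond values are hypothetical.

def phase_breakdown(data_ms, processing_ms, rendering_ms):
    """Return each phase's share of total execution time as a percentage."""
    total = data_ms + processing_ms + rendering_ms
    return {
        "data retrieval": 100 * data_ms / total,
        "processing": 100 * processing_ms / total,
        "rendering": 100 * rendering_ms / total,
    }

breakdown = phase_breakdown(data_ms=1200, processing_ms=300, rendering_ms=500)
for phase, pct in breakdown.items():
    print(f"{phase}: {pct:.1f}%")

# A dominant data-retrieval share points at the source query or data source;
# a dominant rendering share points at the chosen output format.
```

For the sample values above, data retrieval dominates at 60 percent of the total, which would direct tuning effort toward the report's source query rather than its layout.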

For more information

http://www.microsoft.com/technet/prodtechnol/sql/default.mspx
