Results for the Internet Explorer Browsing Performance Assessment

Applies To: Windows 8.1

The Internet Explorer® Browsing Performance assessment measures the quality of the browsing experience in Internet Explorer. It also evaluates the capabilities of the CPU and graphics hardware. The assessment provides three separate browsing workloads that stress the computer in different ways. These workloads are representative of the FishBowl, Blizzard, and Speed Reading sites on the Internet Explorer Test Drive site. The Internet Explorer Browsing Performance assessment uses these three workloads to measure the browsing experience and records CPU and disk activity as metrics in a trace file. The assessment then analyzes the trace file to identify common performance issues.

This topic can help you interpret the metrics produced by the Internet Explorer Browsing Performance assessment. It also provides guidance on how to use the results to identify and resolve common issues that negatively affect your experience with Internet Explorer.

In this topic:

  1. Goals File

  2. Metrics

  3. Issues

For more information about the system requirements and assessment settings, see Internet Explorer Browsing Performance.

Goals File

Some assessments have goals for the metrics that are captured and displayed in the Results View. A metric is usually the measure of an activity. When the metric value is compared to the goal for that metric, the status is color coded in the Results View as follows (a minimal sketch of this comparison appears after the list):

  • Green means that the system has a great user experience and that there are no perceived problems.

  • Yellow means that the user experience is tolerable and that the system can be optimized. Review the recommendations and analysis to see what improvements can be made to the system. These can be software changes, configuration changes, or hardware changes.

  • Red means that the system has a poor user experience and that there is significant room for improvement. Review the recommendations and analysis to see the improvements that can be made to the system. These can be software changes, configuration changes, or hardware changes. You might have to consider making tradeoffs to deliver a high-quality Windows experience.

  • No color means that there are no goals defined for the metric.
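
The color coding is essentially a threshold comparison between the measured metric value and its goal. The following minimal TypeScript sketch illustrates that comparison; the threshold values, the field names, and the assumption that lower values are better are all hypothetical, because the real comparisons are defined by the goals file.

    // Minimal sketch of goal-based color coding. The Goal shape, thresholds,
    // and "lower is better" orientation are hypothetical illustrations; the
    // actual comparisons come from the goals file.
    type Status = "Green" | "Yellow" | "Red" | "None";

    interface Goal {
      greenMax: number;  // values at or below this are Green
      yellowMax: number; // values above greenMax but at or below this are Yellow
    }

    function colorCode(value: number, goal?: Goal): Status {
      if (goal === undefined) {
        return "None"; // no goal defined for this metric: no color
      }
      if (value <= goal.greenMax) {
        return "Green";
      }
      return value <= goal.yellowMax ? "Yellow" : "Red";
    }

    // Example: a hypothetical Draw Time goal of 17 ms (Green) / 34 ms (Yellow).
    console.log(colorCode(20, { greenMax: 17, yellowMax: 34 })); // "Yellow"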

Goals are an invaluable triage tool that helps you understand how the system is performing. A default set of goals is provided when you install the assessments. Unlike the Windows Hardware Certification tests, which provide pass/fail results, the assessment goals are only recommendations.

The default goals are defined for primary metrics, which measure user experiences. These goals directly correlate to perceivable quality indicators. We recommend that you use the default goals file. However, you can also define your own goals. For example, goals for a basic laptop might differ from the goals you set for a high-end desktop computer, or market expectations might change in such a way that you want the flexibility to define different goals and key requirements as time passes and technology improves.

The first time that you view results in the Windows® Assessment Console or the Windows® Assessment Services - Client (Windows ASC), the default goals file is used. If you define your own goals, you can use the UI to set the custom goals file location and then select the custom goals file that you want to use. You must set the goals file location and add a goals file to that location before you can use the UI to apply the custom goals. After a new goals file is selected, it continues to be used for any results that are opened. The assessment tools always look for the last goals file that was used. If that goals file is no longer available, the default goals file is used.

Only one goals file can be used at a time. Goals for all assessments are set in a single goals file. The assessment tools search for goals in the following order (a schematic sketch of this fallback appears after the list):

  1. A custom goals file

  2. The default goals file

  3. Goals that are defined in the results file

  4. Goals that are defined in the assessment manifest
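
The search order behaves like a simple fallback chain: each source is tried in turn, and the first source that defines a goal for the metric wins. The following TypeScript sketch is a schematic of that resolution logic only; the lookup functions are hypothetical placeholders, not real assessment APIs.

    // Schematic of the goals fallback chain. The four lookup functions are
    // hypothetical placeholders that mirror the documented search order.
    type GoalLookup = (metricName: string) => number | undefined;

    function resolveGoal(metricName: string, sources: GoalLookup[]): number | undefined {
      for (const lookup of sources) {
        const goal = lookup(metricName);
        if (goal !== undefined) {
          return goal; // the first source that defines the goal wins
        }
      }
      return undefined; // no goal found: the metric shows no color
    }

    const searchOrder: GoalLookup[] = [
      () => undefined,                                // 1. custom goals file (none configured here)
      (m) => (m === "Fishbowl FPS" ? 60 : undefined), // 2. default goals file
      () => undefined,                                // 3. goals in the results file
      () => undefined,                                // 4. goals in the assessment manifest
    ];
    console.log(resolveGoal("Fishbowl FPS", searchOrder)); // 60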

You can use the sample goals file that is provided at %PROGRAMFILES%\Windows Kits\8.0\Assessment and Deployment Kit\Windows Assessment Toolkit\SDK\Samples\Goals to create your own goals file.

Note

You cannot package a goals file with a job, but you can store it on a share for others to use.

Metrics

The default configuration of this assessment uses all three workloads, each of which covers different aspects of browser performance. The workloads run in the following sequence: FishBowl, Blizzard, and Speed Reading. A single pass through all the workloads is referred to as an iteration. By default, the assessment performs three iterations. The first iteration is an analysis iteration, and the remaining iterations are timing iterations. During the timing iterations, basic information is collected in the trace file to calculate the metrics. During the analysis iteration, additional information is collected to diagnose and detect common performance issues, such as long-running DPCs and ISRs. All the issues displayed in the UI are generated from the analysis iteration.

The metric values shown in the UI are averages across all iterations. If you adjust the assessment settings to run more than three iterations, the value returned is still an average of all iterations. To see the values for individual iterations, in the Results View, right-click the results column header and then click Show iterations. Usually, the results for the analysis iteration are slightly worse than those for the subsequent iterations, because it is a cold run and tracing adds modest overhead.
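
As a hypothetical worked example of the averaging, if three iterations of the FishBowl workload report 180, 210, and 204 fish, the UI displays their mean, 198. The short TypeScript sketch below shows the same calculation; the numbers are illustrative only.

    // Displayed metric value = mean of the per-iteration values.
    const iterationValues = [180, 210, 204]; // hypothetical Fish in Fishbowl results
    const displayed = iterationValues.reduce((sum, v) => sum + v, 0) / iterationValues.length;
    console.log(displayed); // 198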

You can examine the metrics and sub-metrics to find potential performance issues.

  • Fish in FishBowl

  • Snowflakes

  • Speed Reading Score

  • Assessment Environment

Fish in FishBowl

Most applicable to: Driver developers, Application writers

The HTML5 FishBowl workload uses common Web technologies to see how many fish Internet Explorer can animate in real time (60 fps). By default (when the Fish Count setting is set to Auto), fish are added while the frame rate is above 60 fps and removed when the frame rate falls below 60 fps, until equilibrium is reached at 60 fps. Alternatively, if Fish Count is set to a constant, the assessment measures how many frames per second Internet Explorer can render for the specified number of fish. In this case, the workload runs for 30 seconds to make sure that the FPS has stabilized. The assessment records the metrics every second.
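
In Auto mode, the adjustment behaves like a feedback loop: each second, the measured frame rate determines whether fish are added or removed. The following TypeScript sketch illustrates that feedback idea; the step size, the loop structure, and the measureFps model are assumptions for illustration, not the workload's actual implementation.

    // Illustrative feedback loop for the Auto fish count: add fish while the
    // frame rate is above the 60 fps target, remove fish when it falls below.
    // measureFps is a hypothetical stand-in for a real measurement.
    const TARGET_FPS = 60;

    function measureFps(fishCount: number): number {
      // Pretend this machine can animate 12,000 "fish frames" per second.
      return 12000 / Math.max(fishCount, 1);
    }

    function findEquilibrium(initialFish: number, seconds: number): number {
      let fish = initialFish;
      for (let s = 0; s < seconds; s++) {
        const fps = measureFps(fish);
        if (fps > TARGET_FPS) {
          fish += 1;                    // headroom available: add a fish
        } else if (fps < TARGET_FPS) {
          fish = Math.max(1, fish - 1); // too slow: remove a fish, never below one
        }
      }
      return fish;
    }

    console.log(findEquilibrium(1, 300)); // converges to 200 for this model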

The following table provides a brief description of the metrics that the assessment captures when it uses the Fishbowl workload. When you run multiple iterations of the workload, the value is an average.

Fish in fishbowl

The number of fish that the computer can draw in the fishbowl. If you selected a specific number of fish by using the Fish Count setting, this count will be the same as the number that you selected.

If you selected Auto for the Fish Count setting, this number will vary. The workload adds fish until it maximizes the computer's capabilities.

Fishbowl FPS

The number of frames per second that the computer can support while it's running this workload. The maximum number that you should see is 60 fps.

If you set the Fish Count parameter to Auto, the computer will add fish to the bowl until the rate reaches 60 fps or the computer's capabilities are maximized. If you selected a specific number of fish by using the Fish Count parameter, the workload will draw as many frames as it can until the rate reaches 60 fps or the computer's capabilities are maximized.

Note

When Auto mode is used, on some low-end computers, or if Internet Explorer is configured so that it doesn’t use hardware acceleration, it is possible for only one fish to be displayed, because the FPS can never reach 60. Check the Fishbowl FPS metric to confirm this.

You can expand the Fish in Fishbowl metric to see the Fishbowl FPS sub-metric. This is useful in two cases:

  • In manual fish mode, the FPS captures the rendering frequency the browser can achieve with the specified fish count. If the actual FPS exceeds 60, the metric still displays 60 as its upper bound.

  • In auto fish mode, when the fish count is one, the FPS captures the actual rendering frequency that the browser can reach. FPS is especially relevant for low-end computers or remote desktop sessions where the computer can’t render more than one fish.

Typical Influencing Factors

For hardware, this metric can be affected by video decoding, GPU readback, texture sizes, closure code generation, L1 cache, CSS3, and so on. For software, this metric can be affected by computation-intensive background tasks, CPU/GPU utilization, and Internet Explorer plug-ins.

This metric is also greatly affected by the Assessment Environment of Internet Explorer when the site is launched. The smaller the window size is, the more fish Internet Explorer can render.

Analysis and Remediation Steps

Long-running DPCs are a typical issue that can cause glitches in the animation and drop FPS in Internet Explorer. Click the WPA in-depth analysis link to view the DPC call stack and find the exact time point when this long-running DPC occurs. Driver developers should pay attention to such issues because a long-running DPC might imply a driver performance bug.

In WPA, you can zoom into the FishBowl activity in the activity panel. In the Generic Events panel, the trace events that track the changes in Fish Count and FPS are listed under the Microsoft-IE provider. The sampling frequency is 1 Hz. The MSHTML_DOM_CUSTOMSITEEVENT event captures the metrics in a prefix format: FishBowl_FPS_16.0 means that when this event is fired, the FishBowl FPS is 16.0. This helps you identify the time interval to investigate. You can try to determine why a metric changes by looking into the recorded activities. Keep in mind that some fluctuation is expected during the first several seconds. Once the metric becomes consistent in the timeline, a sudden and significant decrease or increase in its value warrants investigation.
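
Because the event payload encodes the metric name and value in a single prefixed string, extracting the numeric value is a simple string split. The following TypeScript sketch shows one way to parse such payloads; it assumes the value is always the final underscore-separated token, which matches the examples in this topic but is not an official parser.

    // Parse a prefix-formatted MSHTML_DOM_CUSTOMSITEEVENT payload such as
    // "FishBowl_FPS_16.0" into a metric name and a numeric value.
    // Assumption: the value is the final underscore-separated token.
    function parseCustomSiteEvent(payload: string): { metric: string; value: number } | undefined {
      const split = payload.lastIndexOf("_");
      if (split < 0) {
        return undefined; // not in the expected prefix format
      }
      const value = Number(payload.slice(split + 1));
      if (Number.isNaN(value)) {
        return undefined; // the suffix is not numeric
      }
      return { metric: payload.slice(0, split), value };
    }

    console.log(parseCustomSiteEvent("FishBowl_FPS_16.0")); // { metric: "FishBowl_FPS", value: 16 }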

When a suspicious metric change is identified, you can select the two events between which the change occurs and zoom into that one-second interval for detailed analysis. In WPA, you can bring in the DPC and ISR CPU usage, sampled and precise CPU usage, and hard faults panels to help with the investigation.

Driver developers and application writers should pay attention to the modules that are used. If the root cause of the change cannot be found, you can enlarge the interval to one or two seconds. The driver plays an important role, but the bottleneck could be anywhere. Everything from whether virtualization is enabled, to the speed of the memory, to which antivirus product is installed, to the display resolution can affect these scenarios. It is possible that no suspicious activity will be noticed during the investigation interval.

If high CPU or disk activity is observed throughout the entire workload, find the root cause of the activity and remove it. Then restart the assessment to verify that the metrics improved.

Additional Information

MSDN Blog: Measuring Browser Performance with the Windows Performance Tools

Snowflakes

Most applicable to: Driver developers, Application writers

The HTML5 Blizzard workload is used in this assessment to measure how many snowflakes Internet Explorer can animate at 60 fps. Similar to FishBowl, snowflakes are added or removed based on the current FPS until equilibrium is reached. The animation is composed of these HTML5 elements:

  • The background sky is an HTML5 canvas gradient

  • The greeting title is created by using a DIV and a Web Open Font Format (WOFF) font

  • The snow bank is a Scalable Vector Graphics (SVG) image

  • The snowman is a set of HTML5 canvas images with rotation and scaling

  • The score is HTML5 canvas text

  • The falling snow is made from an HTML5 canvas image strip

  • The background music is HTML5 audio

Together, these elements give a good representation of the modern website browsing experience.

The following table provides a brief description of the metrics that the assessment captures when it uses the Blizzard workload. When you run multiple iterations of the workload, the value is an average.

Snowflakes

The number of snowflakes that the computer can draw during the Blizzard workload. Higher numbers indicate better performance.

Wind Speed

How fast the wind blows, in miles per hour, during the Blizzard workload. The wind speed is directly related to the number of snowflakes that the computer's hardware supports. The more snowflakes, the faster the wind.

Draw Time

The average time, in seconds, that the computer takes to draw each frame in the Blizzard workload.

The Snowflakes metric reflects how many snowflakes Internet Explorer can render in real time (60 fps). Similar to Fish Count, the Blizzard workload dynamically adjusts the number of snowflakes until the FPS stabilizes at 60. A higher snowflake count signifies better HTML5 browsing performance.

You can expand the Snowflakes metric to see the Wind Speed and Draw Time sub-metrics.

  • Wind Speed is the horizontal and vertical falling speed of the snowflakes. It is usually proportional to the snowflake count: the more snowflakes the browser renders, the higher the wind speed. For the same number of snowflakes, a higher wind speed indicates better performance.

  • Draw Time is the average elapsed time of drawing a single frame. A 5% buffer around the 16.7 ms rolling average is applied to account for clock skew and other variables that could destabilize the number. This keeps the number more stable once equilibrium is reached. A shorter draw time implies better rendering performance. (A sketch of the buffered average appears after this list.)
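
One way to picture the buffered rolling average is as a dead band: samples that land within 5% of the current average are treated as equal to it, so the reported number only moves when a sample falls outside the band. The following TypeScript sketch is an interpretation of that description; the smoothing factor and the exact smoothing method the workload uses are assumptions.

    // Sketch of a rolling-average draw time with a 5% stability buffer.
    // Samples within +/- 5% of the current average are treated as equal to
    // it, which keeps the number steady near the 16.7 ms equilibrium.
    class DrawTimeAverage {
      private average = 16.7;       // ms; one frame at 60 Hz
      private readonly alpha = 0.1; // hypothetical smoothing factor

      add(sampleMs: number): number {
        const buffer = this.average * 0.05;
        const effective =
          Math.abs(sampleMs - this.average) <= buffer ? this.average : sampleMs;
        this.average += this.alpha * (effective - this.average);
        return this.average;
      }
    }

    const rolling = new DrawTimeAverage();
    // Jitter inside the band leaves the average at 16.70; the 25 ms outlier moves it.
    [16.5, 17.0, 16.8, 25.0].forEach((s) => console.log(rolling.add(s).toFixed(2)));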

Typical Influencing Factors

In terms of hardware, this metric is mainly influenced by rendering composition, JavaScript callback frequency, graphics culling, and module pattern. In terms of software, this metric is affected by computation-intensive background tasks, CPU/GPU utilization, and Internet Explorer plug-ins.

As with the FishBowl workload, these metrics are greatly affected by the Assessment Environment of Internet Explorer (the smaller the actual window size, the more snowflakes can be rendered).

Analysis and Remediation Steps

The analysis and remediation steps are similar to those described for the FishBowl Fish Count metric. The Blizzard metrics are also tracked in MSHTML_DOM_CUSTOMSITEEVENT in the same prefix format.

Additional Information

MSDN Blog: HTML5 Blizzard: Full Hardware Acceleration in Action

Speed Reading Score

Most applicable to: Driver developers, Application writers

This metric measures how long, in seconds, Internet Explorer takes to flip the billboards. Depending on the average draw duration of a single billboard, the number of billboards can differ slightly. The shorter the average draw duration, the more billboards are flipped. This metric can differ greatly across hardware and software configurations. It changes more rapidly than the FishBowl and Blizzard metrics, which makes it a good way to compare browsing performance across computer configurations.

If the browser can flip the graphics faster than 60 fps, the unused CPU time is used to flip additional characters in a single frame. For example, on some computers, Internet Explorer can compose the entire alphabet multiple times in a single frame. In this case, the Speed Reading workload intentionally slows down the browser to smooth the animation.
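
The relationship between draw time and flips per frame can be made concrete: at 60 Hz, each frame has a budget of about 16.7 ms, so a flip that takes only a fraction of that budget leaves room for several more flips in the same frame. The TypeScript sketch below works through that arithmetic; it illustrates the idea only and is not the workload's code.

    // If one flip draws in less than the ~16.7 ms frame budget at 60 Hz, the
    // leftover time can be spent flipping additional characters in the same
    // frame. Illustrative arithmetic only.
    const FRAME_BUDGET_MS = 1000 / 60; // ~16.7 ms per frame

    function flipsPerFrame(drawTimeMs: number): number {
      // At least one flip per frame; extra flips fill the remaining budget.
      return Math.max(1, Math.floor(FRAME_BUDGET_MS / drawTimeMs));
    }

    console.log(flipsPerFrame(16.0)); // 1: the flip barely fits in one frame
    console.log(flipsPerFrame(2.0));  // 8: a fast machine flips 8 characters per frame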

The following table provides a brief description of the metrics that the assessment captures when it uses the Speed Reading workload.

Speed reading score

The score for how fast the computer can speed read. It's based on the time that the computer takes to draw all the billboards in the workload, and the average draw time for all the billboards.

Speed reading FPS

The number of frames per second that the computer can support while it's running the Speed Reading workload. The maximum number that you should see is 60 fps.

Draw Time

The average time, in seconds, that the computer takes to draw each billboard in the Speed Reading workload. 

You can expand the Speed Reading Score metric to see Speed Reading FPS and Draw Time sub-metrics.

  • Speed Reading FPS measures the average frames per second of the flipping animation during the test.

  • Draw Time is the rolling average draw time, similar to the one presented in Blizzard. As explained previously, if the draw time is shorter than a single cycle (defined as 16.7 ms at 60 Hz), multiple draws may be performed in a single draw loop.

Typical Influencing Factors

This metric is mainly influenced by graphics driver throughput, texture swapping, overdrawing, background compilation, computation-intensive background tasks, high CPU/GPU utilization, Internet Explorer plug-ins, and the actual window size of Internet Explorer.

Driver developers should pay attention to a high value of this metric (even if the FishBowl and Blizzard metrics indicate good performance) because a high value may signify a performance bug in the graphics driver.

Analysis and Remediation Steps

The analysis and remediation steps are similar to those described for the FishBowl Fish Count metric. Note that the Speed Reading Score is not tracked in MSHTML_DOM_CUSTOMSITEEVENT, because the score is only available when the test is finished; however, the changes in FPS and Draw Time are shown in WPA.

Besides the issues listed in the Assessment Results Viewer, a sudden drop in FPS or increase in Draw Time can be a good starting point for in-depth analysis.

Assessment Environment

Most applicable to: Driver developers, Application writers

This metric captures the environment in which the assessment runs, notably the important factors that may affect the other performance metrics.

You can expand the Assessment Environment metric to see a set of sub-metrics.

  • Actual Window Size is the window size, in pixels, of Internet Explorer when the test sites are started. As explained earlier, a different window size or screen resolution greatly affects the performance metrics. A smaller window leads to better rendering performance. You can test different window sizes by specifying the Window Size parameter. By default, the window is maximized; in this case, the window size should equal the client region size of Internet Explorer when it is maximized at a given screen resolution.

  • Internet Explorer Version is the version of Internet Explorer that the assessment launches. The Internet Explorer Browsing Performance assessment supports both Internet Explorer 9.0 and Internet Explorer 10.0. The measured performance may differ between these two versions because of improvements in script engines, rendering technology, HTML5 support, and so on.

Typical Influencing Factors

The Actual Window Size is influenced by the Window Size parameter and the screen resolution. Internet Explorer Version is determined by the version of Internet Explorer that is installed on the computer.

Analysis and Remediation Steps

Pay attention to the Actual Window Size every time that you examine the metric results. Keep in mind that a change in Actual Window Size can cause a significant change in the performance metrics.

Issues

This assessment performs advanced issue analysis and provides links to Windows® Performance Analyzer (WPA) to troubleshoot the issues that are identified. The following table describes the kinds of issues and recommendations that appear based on the results of the assessment. In most cases, you can click the WPA Analysis link to troubleshoot the issues that appear. When WPA opens, additional details about disk activity or CPU activity might be available, depending on the type of issue identified. For more information about in-depth analysis issues and recommendations, see Common In-Depth Analysis Issues.

Deferred Procedure Calls (DPCs)

DPCs that run too long or too often can use lots of CPU time, cause delays in applications, and slow down overall system performance.

Click the link for WPA in-depth analysis to trace and investigate problematic DPC activity.

High CPU consumption

High CPU consumption can cause delays in applications and slow down overall system performance.

Click the link for WPA in-depth analysis to trace and investigate problematic CPU activity.

Interrupt Service Routines (ISRs)

ISRs that run too long or too often can use lots of CPU time, cause delays in applications, and slow down overall system performance.

Click the link for WPA in-depth analysis to trace and investigate problematic ISR and DPC activity.

The assessment reports an exit code of 0x80050006

This error occurs when maintenance tasks have been registered on the PC but have not completed before the assessment runs. This prevents the assessment from running, because maintenance tasks often affect assessment metrics.

To resolve this issue, do one of the following:

  1. Ensure that the computer is connected to a network and is running on AC power. Manually initiate pending maintenance tasks with the following command from an elevated prompt:

    rundll32.exe advapi32.dll,ProcessIdleTasks

  2. Disable regular and idle maintenance tasks, and stop all maintenance tasks before running the assessment.

See Also

Concepts

Internet Explorer Browsing Performance
Assessments
Internet Explorer Startup Performance
Streaming Media Performance

Other Resources

Windows Assessment Toolkit Technical Reference
Windows Assessments Console Step-by-Step Guide
Windows Performance Toolkit Technical Reference