BizTalk Server 2006: Managing a Successful Performance Lab

Doug Girard

August, 2006

Summary: This document illustrates and identifies ways in which to test and evaluate servers before going live in a business setting. Step-by-step testing protocol is emphasized and demonstrated. Checklists and resource lists/links provide additional direction. Multiple scenarios are explored and illustrated.

Part of every application’s pre-production testing should include performance and stress testing. You should know the limits of your platform and your applications with certainty prior to ever receiving or sending a live message in production use.

Performance and stress testing can also intersect with capacity planning. You should understand up front what type of hardware you’ll require and in what quantity; this information will be critical for your business. Also, if your company has very strict requirements for volume or latency with constraints on hardware, proving that the system can meet the required goals with the expected hardware will be key.

This document provides general guidance on approaching performance labs with BizTalk Server 2006. For specifics, we reference related documents that already exist on the Web or in the product documentation for additional assistance.

Performance is a topic that needs to be considered from the very beginning of a BizTalk project’s life cycle. As our Planning for Sustained Performance: Project Planning Recommendations by Phase documentation article explains, performance needs to be firmly positioned in the Requirements, Design, Implementation, Verification, and Release milestones of your project.

Part of this project plan should budget in the necessary time and resources to conduct a full BizTalk Server performance lab for the solution that will run in production. Before beginning your performance lab engagement, you should have already:

  1. Established performance criteria
  2. Identified performance risks
  3. Estimated sizing
  4. Acquired detailed throughput and latency profiles
  5. Investigated performance risk mitigations
  6. Refined system size estimations
  7. Reached design and code implementation completion
  8. Completed your functional testing, including all smoke testing, sanity checks, regression tests, and multi-machine trials.

Now that you are actually ready to begin the performance lab, developing a well thought-out strategy and a methodology for approaching the engagement will go a long way towards achieving success. A successful performance lab will include the following elements:

  1. Defining the Scope
  2. Constructing a Plan
  3. Technical Preparation for the Lab
  4. Lab Build-out
  5. Lab Execution
  6. Concluding the Lab

Each of these points will be elaborated upon in the following sections. The overall lab process might resemble the following depiction.


Figure 1: Fictional Contoso Performance Lab Process


  • Define the abstract / executive summary.
  • What is the background for this lab?
  • What are the prioritized goals for the lab?
  • What is the bar of success?
  • What is an acceptable "call it quits" point?
  • Create a high-level architectural diagram as part of the lab documentation.
  • If third-party systems are used, record each respective system design at a high level.

Defining the Purpose

The benefits of conducting a BizTalk Server performance lab should already be obvious to you and your team, but communicating them to the business is often a critical step, justifying the time commitment and the cost. Below are a few examples.

Abstract / Executive Summary:

Contoso’s Operations team will be conducting a performance lab on the Order Fulfillment application between January 1 and January 14, 2006. We plan to upgrade the Order Fulfillment application from BizTalk Server 2004 to BizTalk Server 2006. While the migration is code complete, it has yet to be proven that BizTalk Server 2006 can handle the new volume requirements mandated by the business for the next three years of operation. With a target go-live date of April 15, 2006, the Operations team will be using this lab opportunity to attempt to meet the current load requirements, the projected load requirements, and later obtain the maximum sustainable throughput (MST) for the environment with different hardware configurations. Specific goals for the lab are outlined in the Goals section to follow within this lab manual.

Purpose Statement:

The purpose of this lab engagement is to prove to the business that BizTalk Server 2006 can comply with our performance guideline requirements for the next three years of operation.

Setting Goals and Success Criteria

Performance labs can easily go awry in the absence of quantitative, measurable goals. Goals should be used as motivation, and these points should be revisited periodically during the lab to monitor progress. Goals should also lead to establishing success criteria: the minimum bar for completing a rewarding engagement. Below is a comprehensive example.

Goals Overview

  1. The primary purpose of the lab is to ensure BizTalk Server 2006 can perform at parity with the current BizTalk Server 2004 environment by being able to process 12M messages within a 12-hour period sustainably. Second to this is proving that BizTalk Server 2006 can perform well against the performance needs expected three years from now, meeting 20M messages within a 12-hour period sustainably. This is independent of the server classes chosen and the topology arrived upon. Abbreviated tests will be carefully constructed to simulate this daily performance goal.
  2. After the primary objectives have been proven, regardless of the topology of machines, the lab’s focus will shift to trying to optimize the performance of the system with different hardware configurations, obtaining the Maximum Sustainable Throughput (MST) on different hardware configurations. Later, the business will be able to choose their hardware knowing the upper limit of each topology.
  3. Finally, if there is extra time, recovery, administrative, and failover cases will be attempted.

Performance Goals

  1. Process 12M messages within 12 hours at a steady input rate. Any hardware configuration.
  2. Process 20M messages within 12 hours at a steady input rate. Any hardware configuration.
  3. Process 20M messages with load exactly simulating production profile, with peaks and troughs. Any hardware configuration.
  4. Establish MST numbers for different hardware configurations.
  5. Resume 1M failed messages in the DB while under a 400 messages/sec load and recover to steady-state processing within 1 hour.
  6. Drop a BizTalk Server node from the group under a 400 messages/sec load and observe healthy BizTalk Server system behavior.

Success Criteria

  • Goals 1, 2, and 4 are required for lab success; the lab will not conclude until they are completed.
  • Goals 3, 5, and 6 are optional if time remains; otherwise they will be conducted at a future time before our production window arrives.

Performance Profile Specifics

Daily Volume
  • CURRENT
    • 12M messages/day is the current performance requirement
    • Per day actually means within a 12-hour trading window
    • This means 12M inbound, 15M outbound
  • LAB
    • 20M/day (averages to 463 msgs/sec in a 12-hour period)
    • Note: They expect 25% year-over-year growth in transaction volume, so finding the maximum sustainable throughput through BizTalk Server is an important lab exercise
Peak Rates
  • CURRENT
    • Currently up to 2500 messages/sec (inbound and outbound)
  • LAB
    • No different
Hours of Operation
  • CURRENT
    • Open 24 hours; need to be able to handle transactions at any hour
    • Bulk of transactions come in during normal business hours (12-hour period)
    • Downtime possible on Sunday morning (12 AM to 6 AM)
  • LAB
    • No different
Additional Requirements
  • They don’t have any tight SLAs but need to make sure that the system is “sustainable,” which they define as “make sure that all of Monday’s data is reconciled by EOD Monday (midnight)”
  • Should test for error cases where a machine is knocked out
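The daily-volume figures above reduce to sustained per-second rates with simple arithmetic. A quick sketch (figures taken from the profile above):

```python
# Translate a daily volume goal into the sustained rate a load tool must drive.
def required_rate(messages_per_day: int, window_hours: float) -> float:
    """Messages/sec needed to clear the volume within the trading window."""
    return messages_per_day / (window_hours * 3600)

print(f"current: {required_rate(12_000_000, 12):.0f} msgs/sec")  # ~278
print(f"target:  {required_rate(20_000_000, 12):.0f} msgs/sec")  # ~463
```

The 463 msgs/sec figure in the LAB profile above is exactly this calculation for the 20M/day target.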

Other Goals

  • Have fun!
  • Get a minimum of 8 hours of sleep per night.

Remember, performance criteria should have been arrived at very early in the application life cycle (i.e., the Requirements Phase) and should be easily translated into lab goals.

Performance criteria should always be future-thinking and should account for the year-over-year growth in transaction volume expected by your business.
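As a sketch of that projection, using the 25% year-over-year growth figure from the example profile (note that a business may round or cap the raw projection, as the 20M lab target above does):

```python
def projected_daily_volume(current_daily: int, yoy_growth: float, years: int) -> int:
    """Compound the current daily volume by the expected year-over-year growth."""
    return round(current_daily * (1 + yoy_growth) ** years)

# 12M msgs/day compounding at 25% per year for three years
print(projected_daily_volume(12_000_000, 0.25, 3))  # 23,437,500
```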

Documenting the Scenario

Documenting the entire scenario end-to-end brings everyone onto the same page during the lab, identifies the scope of the application(s) to be tested, and creates a paper trail for historical reference. Below are some approaches to properly documenting your scenario from a performance perspective, above and beyond what you might normally have in project specifications.

Basic Questioning

Question | Scenario Profile

Basic Scenario Definition


Do you have a pure messaging scenario?


What transports do you use inbound?


What transports do you use outbound?


Do you use DTA tracking?


Do you use BAM tracking?


Do you use orchestration?


Do you use BRE?




How many messages per day do you normally receive inbound?


How many messages per day does this normally result in outbound?


What is the volume on a slow day?


What is the volume on a heavy day?


What is your expected year-over-year growth in transaction volume?


What is the maximum peak of messages that you will receive inbound in a given interval?


Hours of Operation


Do you need to be able to handle a load 24 hours a day, 7 days a week, 365 days a year?


Do you have planned downtimes each week? For what duration?


Message Types


Do you receive XML messages?


Do you receive Flat File messages?


Do you receive other document formats?


Additional Requirements


Do you have strict Service Level Agreements (SLAs)? Some examples are:

  • All Monday’s messages must be processed completely by start of business day Tuesday.
  • 90% of files must be processed in 1 hour, 100% must be processed in 1 day.
  • 10KB files must be processed in under 30 seconds, 100KB files must be processed in under 2 minutes.


What are your data maintenance plans, i.e., archiving, purging, disaster recovery?
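SLA examples like those above are easy to turn into automated pass/fail checks on the latency samples collected during a test run. A minimal sketch, with illustrative thresholds and sample data:

```python
def sla_met(latencies_secs, pct, limit_secs):
    """True if at least pct% of messages completed within limit_secs."""
    within = sum(1 for t in latencies_secs if t <= limit_secs)
    return 100.0 * within / len(latencies_secs) >= pct

samples = [5, 12, 30, 60, 120, 300, 600, 1200, 3500, 5000]  # secs, illustrative
print(sla_met(samples, 90, 3600))    # "90% of files in 1 hour"  -> True
print(sla_met(samples, 100, 86400))  # "100% in 1 day"           -> True
```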



  • Describe in exhaustive detail the hardware and topology used for the lab end-to-end.
  • Create reference posters and hang these in the room of the lab. Mark these posters with changes made during the week.
  • Plan to maintain a shared Lab Manual which will contain comprehensive information on the scenario and lab, and will be updated daily with status and progress made.
  • Construct a conservative day-to-day schedule which builds in buffer.
  • Create a project plan and keep it updated.
  • Schedule all lab resources far in advance.
  • Schedule the necessary personnel to ensure staffing is never a bottleneck, including developers and any Subject Matter Experts (SMEs) who oversee third-party systems that may be involved in the solution.

Outlining Hardware and Topology

Creating an inventory of the hardware available to you and the hardware planned to be a part of the topology is one of the first steps in beginning a lab engagement. Settling on machine and hardware device naming conventions can also go a long way toward creating a smooth lab experience.

Very likely, your hardware topology will change during the course of the lab, so having daily updates posted on the walls or on a projector will avoid confusion due to changes. Keeping track of all of your changes, in both hardware and software, and the justifications for each change, should be recorded methodically in your Lab Manual. A spreadsheet, daily notes, or even diagrams can be used.


Figure 2: A Possible Lab Topology

The above diagram is an oversimplified depiction of an environment’s topology. Ideally, you’ll also want a very verbose diagram which includes host distributions and service descriptions, machine specifications (name, manufacturer, CPUs, memory, local disks, IP Address, network card/s, operating system), endpoint details, hardware device specifications, switch specifications, SAN specifications, and the like.

Proposing a Lab Timeline

Setting appropriate times to meet throughout the lab, especially with the extended team, is an exercise which should be done up front. Laying out a skeleton of a schedule is also a good idea and can help keep your team on track, but a performance lab will almost never track exactly to schedule. Therefore, be sure you are creating a conservative timeline which builds in buffer and time for unexpectedly arduous troubleshooting. Below is an example. During the lab this project plan should be updated to reflect the latest information and remain fresh.

Meeting Schedule


Activity Schedule


Figures 3 and 4: Establishing a Conservative Activity Schedule

It is also very common in performance testing to discover a performance bottleneck in custom code or orchestration that needs redevelopment. And frequently custom testing or measuring tools need to be adjusted after analysis. Factoring in some extra time for redevelopment in the cycle and having developer resources available on hand during the lab is a "must-have".

Establishing a Recording Methodology

With your team, agree upon some standards for recording, data collection, and daily status reporting. Dividing up these responsibilities among team members is often a good idea. Using a common Lab Manual to archive these findings is a must. During the lab, convening at the end of each day to summarize the progress made and agree upon key findings will help drum a consistent rhythm for the lab, and will aid with daily status reporting.

Below is a fictitious sample of a daily status report sent to the extended team.


Key take-aways:
  1. Gigabit network is needed, otherwise network saturation occurs on at least the destination queue machine’s NIC.
  2. We believe we should collapse the data layer to two physical machines with three message boxes, one sharing the master message box, but continuing to separate the LUNs on the SAN.
  3. Two receivers are not pegged at a 400 docs/s total load but max out at 500 docs/s (250/s each). To achieve greater than 400/s with headroom, we should go to 3 or more receivers.
  4. We haven’t yet seen the senders as the bottleneck and would like to reduce the other bottlenecks and see if we can get rates of greater than 65-70 doc/s out of each of them.
Topology Changes:
  1. Went to three message boxes on three different SQL boxes today.
  2. Gigabit switches were installed.
Resolved Issues:
  1. SSPI Context Errors resolved. Removed the machine entries from the Active Directory Domain Controller, removed SQL machines from the domain and re-added them. (May be related to first using a Local System account and then switching it to a Domain Account.)
  2. Two more Intel Xeon 2-ways acquired and built out.
  3. Throttling settings understood a bit better after a call with PSS.
New Issues:
  1. We need more than just 2 receivers. We cannot keep up with the volume of messages coming in, especially when running off of local disks.
  2. There are currently no maintenance plans, and recommendations are needed from the extended Ops team.
  3. May need two more 2-ways prepared.
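The receiver-scaling takeaway in the sample report is simple division against the observed per-receiver ceiling (about 250 docs/s in this fictitious status report), plus some headroom before committing to a machine count. A sketch:

```python
import math

def receivers_needed(target_rate: float, per_receiver_max: float,
                     headroom: float = 0.2) -> int:
    """Receivers required to sustain target_rate with spare capacity."""
    return math.ceil(target_rate * (1 + headroom) / per_receiver_max)

print(receivers_needed(400, 250))  # 2 receivers suffice at 400 docs/s
print(receivers_needed(500, 250))  # pushing past the 500/s ceiling needs 3
```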

Test Run Results

Run 1: 300 documents/second
  • Went to three message boxes on 3 SQL boxes
  • 300/s looked perfectly clean; aborted run to bump up load

Run 2: 400 documents/second (200/s at each receiver; 2 receivers, 6 senders, 3 SQL)

  CPU utilization (%):
  • Recv – 65, 65
  • Send – 99, 99, spiky 65
  • Master – 10-15
  • MB2 – 31 (8-way)
  • MB3 – 53

  Rates (docs/s) and observations:
  • Recv – 198, 198
  • Send – 65
  • Spool – 25,000 and 25,000 at end
  • Little backup in the MSMQ receives
  • 20 min run; 30 sec for MSMQs to drain; 3:15 to complete; 16.23% catch up
  • Definite bottlenecking at the NIC on the destination MSMQ machine; need gigabit switches
  • No throttling on receive until end of run, then DB throttling
  • Rate throttling on send
  • Second half of run: memory and unprocessed-message throttling

Et cetera …


  • Describe the solutions to be tested on a very granular level, such as at a message-flow level.
  • Have third-party SMEs meticulously document their system details.
  • Create message histograms and arrive at a representative data set for lab testing.
  • Define the discrete use cases to be tested.
  • Create a detailed performance profile.
  • Receive solution artifacts from all respective teams prior to the lab start date.
  • Ensure build and deploy instructions are also available.

Recording the Detailed Solution Design

Defining the BizTalk Server scenarios to be tested and their individual performance profiles in thorough detail is a necessary prerequisite for beginning a performance lab. This will help to scope the lab, allow your team to easily settle on discrete use cases to test, and create the needed data to test with.

Detailed Questioning

Question | Scenario Profile

Scenario Complexity


How long do your orchestrations run (secs, mins, days, months)?


How many subscriptions are there (static and dynamic)?


How complex are the subscriptions (convoys, number of items)?


How many orchestrations are initiated per request?


How many transactions and persistence points are used?


How many messages are created inside of BizTalk per request?


Scenario Complexity - Messaging


How many subscriptions are there?


What is the complexity of the subscriptions (in-order, convoys)?


Scenario Complexity - Maps


How many transformations are called?


How complex are the maps used?


Are your maps memory- or processor-heavy?


Scenario Complexity - Other


Are your SQL databases dedicated to the BizTalk solution or shared among other solutions?


Is your data subsystem dedicated to the BizTalk solution or shared among other solutions?


Are there other, more sophisticated specifics to your solution that are worth mentioning?


Additional Requirements


Do you have budget limits, e.g. the cost for hardware and software licenses needed for that hardware cannot exceed $750K?


Do you have minimum hardware size requirements, e.g. we don’t put anything into production that is smaller than an 8-way?


Do you have maximum hardware size requirements, e.g. we will only use 2-proc blade servers and scale out?


Do you have restrictions on a maximum number of servers, e.g. our department is charged back by our IT department for the number of machines we have in deployment (multiplied fourfold for the different environments: production, pre-production, integration testing, development)?


Do you have configuration limitations, e.g. we will not use NLB, only switch load balancing, or we only use third-party clustering tools?


Do you have environment limitations? Using the same type of hardware that will be used in the production environment is critical to reproducing performance numbers. This includes not only machine type (CPU type, number of CPUs, memory, etc.), but also switches, SANs, and the number of firewalls. The performance lab may influence which hardware is purchased, but some parts of the build-out are not under the control of the department/team building the project.


Are there architectural requirements that would affect performance, e.g. Geo-clustering, remote SAN mirroring, etc.?


Creating a Message Histogram

What is the breakdown of message sizes and types received during the average day? Perhaps 50% are 1KB XML, 25% are 10KB XML, 15% are 100KB XML, and 10% are 10KB FF (which expand to 100KB XML).

Create a histogram of the data to be expected and then arrive at a representative sample of messages which can be used for Load Generation while still closely simulating the production message profile. See the figure below.

[Table: message-size histogram and representative ten-message sample. Columns: Size Bucket | Actual Qty | Percent | Ten Msgs | Rounding | Chosen Size; buckets beginning at < 1KB. Resulting average message size: 15.2KB]
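The collapse from a histogram to a ten-message representative set is mechanical arithmetic. A sketch using the illustrative 50/25/15/10 breakdown mentioned earlier (not the table's actual data):

```python
# Illustrative size-KB / percent buckets; not the lab's real histogram.
buckets = [("1KB XML", 1, 50), ("10KB XML", 10, 25),
           ("100KB XML", 100, 15), ("10KB FF", 10, 10)]

# Round each percentage to a count out of ten sample messages.
ten_msgs = [(name, round(pct / 10)) for name, _size, pct in buckets]
avg_kb = sum(size * pct for _name, size, pct in buckets) / 100

print(ten_msgs)          # counts out of ten, e.g. five 1KB messages
print(f"{avg_kb:.1f}KB")  # weighted average message size for this breakdown
```

The resulting ten messages can then be fed to LoadGen in those proportions while still closely simulating the production message profile.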

Defining Discrete Use Cases

Conceivably, your BizTalk Server implementation supports only one primary use case, perhaps the sending of all messages through a set of specific maps and on to one destination endpoint. In other cases, your solutions may be more complex. If they are more intricate in nature, identifying all of the different use cases will assist with test case creation and help to predict the effects of inbound load on the system.

The example below shows a simple content-based routing solution in which all messages reach the same destination queue (100%), but a small portion (roughly 10%) are also mapped and routed to a second back-end system based on properties of their messages. This means that a daily inbound volume of 1M messages will actually result in 1.1M messages outbound, and hence an additional processing burden on the send side.

Message Type | Message Percentage | Processing Needs | Destination Queue
All Product Orders | 100% | Passthru | MSMQ
Orders Over $1000 | ~10% | Mapping | SOAP
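The outbound arithmetic generalizes: total outbound volume is the inbound volume multiplied by the sum of the routing percentages. A sketch with the figures from this example:

```python
def outbound_volume(inbound: int, route_fractions) -> int:
    """Total outbound messages given the fraction routed to each destination."""
    return round(inbound * sum(route_fractions))

# Every message passes through to MSMQ (100%); ~10% are also mapped to SOAP.
print(outbound_volume(1_000_000, [1.00, 0.10]))  # 1,100,000
```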

In still more advanced cases, multiple orchestration flows, or even multiple applications, may be involved simultaneously. Documenting these use cases and their load proportions helps to define all of the needed test cases and properly set message volumes.


Figure 5: Use Case A: Receives 25% of the System's Daily Volume

Creating a Detailed Performance Profile

It is very important to create a performance profile for your average day and for your busiest day. Analyze the message flow during a typical 24-hour period. What is the volume of messages per 30-second slices of time throughout the day? The result should be something like the chart below.


Figure 6: Actual Message Volume Profile
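Building that profile from raw arrival data is a matter of bucketing timestamps. A minimal sketch, assuming arrival times expressed as seconds since midnight:

```python
from collections import Counter

def volume_profile(arrival_secs, slice_secs=30):
    """Count messages per 30-second slice of the day."""
    return Counter(int(t // slice_secs) for t in arrival_secs)

arrivals = [1, 5, 29, 31, 45, 61, 95]  # illustrative sample
print(volume_profile(arrivals))
# slice 0 (0-30s): 3 msgs, slice 1 (30-60s): 2, slice 2: 1, slice 3: 1
```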

Next, create another chart with discrete "buckets" of message sizes (in significant ranges) as different series, such as depicted below.


Figure 7: Message Volume by Message Size

Now that you have these profiles established and well understood, it is much easier to create performance and stress tests. You can translate these profiles into actual test cases which can be run with LoadGen or another stress tool. Examples of some of these performance patterns that you’ll want to turn into test cases are described below.

Steady Flow

A flat and steady rate of data, probably set at a rate equal to your average messages processed within a given processing period, makes for a very common and elementary test case.


Figure 8: Steady Flow Volume

Weekends and Lulls

Testing for all of your load patterns, even expectedly low volumes, is important. Systems can behave unexpectedly under low load and could potentially violate service level agreements unless carefully tested for.


Figure 9: Low Volume Pattern

Ensure your application will still meet expected SLAs and performance goals, even under uncharacteristically low volume.

Simulated Testing

Arriving at a pattern more representative of your actual live data profile is no easy task and requires careful analysis of your actual performance profile. How many messages arrive and at what rates? Does this amount differ during the day? Do you get single or double camelback-shaped humps or does the profile remain steady all day long? When do you expect peaks and troughs, and what rates are these peak numbers at? Do you get volume surges only at certain hours? What is the distribution of message sizes? Does this distribution change throughout the day? Et cetera.

You may be able to simulate this load pattern with load generators such as LoadGen. If you currently have the messages arriving into another system, you may also be able to replay an actual day’s captured load or route/duplicate the day’s normal messages over to the testing environment. In the end, you should succeed in producing a repeatable load profile which can actually simulate your production load.


Figure 10: Typical Volume Profile
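A camelback profile like the one described can be synthesized as a per-interval rate schedule for a load tool to replay. A hedged sketch; the baseline rate, peak rate, hump hours, and widths are all illustrative parameters, not production data:

```python
import math

def camelback_rate(t_hours, base=100, peak=450, centers=(10, 15), width=1.5):
    """Target msgs/sec at hour t: baseline plus two Gaussian humps, capped at peak."""
    humps = sum(math.exp(-((t_hours - c) ** 2) / (2 * width ** 2))
                for c in centers)
    return base + (peak - base) * min(humps, 1.0)

schedule = [round(camelback_rate(h)) for h in range(24)]  # one rate per hour
print(schedule)  # low overnight, two mid-day humps peaking near 450/s
```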


You should also consider different times of the year. Would you expect your volumes to rise during holidays or drop during vacation times? If so, be sure to create test cases to handle these changes in volume.


Figure 11: Holiday Volume Profile

Recovery Processing

Another test case often overlooked is simulating a period of downtime. Downtime may be a regular occurrence within your infrastructure and testing for recovery processing with queued-up messages will be necessary.


Figure 12: Recovery Profile
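The recovery test case can be sized with simple backlog arithmetic: messages queue up during the outage and then drain at the spare capacity between the system's maximum sustainable rate and the ongoing inbound rate. A sketch with illustrative rates:

```python
def catch_up_hours(inbound_rate, max_rate, downtime_hours):
    """Hours to drain the backlog accumulated during an outage."""
    backlog = inbound_rate * downtime_hours * 3600  # messages queued up
    spare = max_rate - inbound_rate                 # msgs/sec of headroom
    return backlog / (spare * 3600)

# 2-hour outage at 400 msgs/sec inbound, 500 msgs/sec max sustainable rate
print(catch_up_hours(400, 500, 2))  # 8.0 hours to return to steady state
```

Running this arithmetic before the lab helps decide whether a recovery goal (such as "recover within 1 hour") is even achievable with the planned headroom.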

Edge Cases

There may be other infrequently occurring edge cases that would still warrant performance testing. Be mindful of application upgrades, resuming and terminating instances, and any other operational tasks that might be part of long-term ownership of the infrastructure and would have to run while processing load. Make certain these scenarios are considered for inclusion in the capacity testing.

In general, a careful and comprehensive examination of your load profiles and scenarios throughout the day, the week, and the year, and testing for these cases during the lab will ensure that BizTalk Server can predictably handle all of these patterns.

Assembling Solution Artifacts

Finally, making sure all solution artifacts are code complete and thoroughly tested before the lab begins is a must. You do not want to be wasting cycles during the lab debugging code for functional issues. Having these artifacts available before the lab will also allow you to test them briefly during the lab build-out stage to iron out any deployment problems and ensure the actual lab hits the ground running, which segues to our next section.


  • Obtain all build-lab infrastructure at least a week in advance of the lab start date.
  • Configure third-party software systems.
  • Configure the BizTalk Server environment.
  • Validate implementation.
  • Write automated load tests.
  • Configure performance monitoring.
  • Establish and document the solution’s performance baseline.

Build-out Checklists

There are times during the lab when you should try to establish near-robotic practices to reduce errors, and the lab build-out is one of them. Checklists should be used whenever possible—for machine creation and configuration, overall environment construction, and application deployments, just as a few examples. The following sections elaborate on some of these.

Note   To create checklists, use the product documentation and other references (see the "Additional Resources" section) to arrive at comprehensive checklists that are customized for your implementations. Below are some typical types of checklists that should be created and followed before and during the lab. The specifics of these checklists are only to be used as illustrations, and are by no means documentation or intended to be followed as standard, complete, or comprehensive guidance.

Network and Administrative Configuration

In consultation with your network administrators, the creation of Active Directory® domain accounts and groups might follow a checklist which resembles the following.

Domain Controller Created (Optional)
Domain Groups Created
  1. DOMAINNAME\Northwind BizTalk Application Users
  2. DOMAINNAME\Northwind BizTalk Isolated Host Users
  3. DOMAINNAME\Northwind BizTalk Server Administrators
  4. DOMAINNAME\Northwind BizTalk Server Operators
  5. DOMAINNAME\Northwind SSO Administrators
  6. DOMAINNAME\Northwind SSO Affiliate Administrators
Domain Accounts Created
  1. DOMAINNAME\Northwind Account
    1. Password: ********
    2. Membership in all groups & local admin on all boxes
  2. Administrator (local)
    1. Password: ***********

Some companies will prefer to create a separate domain for the purposes of the performance lab, eliminating other network contention, but others may not go to these lengths.

In general, permissions issues can often slow down progress in a lab and are fundamentally tangential to the purpose of the exercise. Running with administrative privileges should not be done in production, but might be a reasonable idea for a performance lab.

Also, be sure to schedule time with your lab technicians to assemble the right mix of network hardware, configure switches, subnets, and VLANs, and test for correctness and fitness.

SQL Server Creation

In consultation with your SQL Server™ and SAN administrators, SQL Server and related database setup guidelines might look something like the following.

Install OS
  1. Windows Server 2003 ENT SP1
  2. Pagefile separated
  3. Set the /3GB Switch on the SQL Servers
Install all Prerequisites
  1. Enable DTC
Install SQL Server 2005
Install SQL Server 2005 Service Pack 1
Use SQL Aliasing
Enable AWE - Min 13GB, Max 13GB
Pre-create BizTalk DBs
  1. BizTalkMsgBoxDb [Primary] On the SAN
    1. S: DATA (10GB initial)
    2. T: LOG (10GB initial)
  2. BizTalkMsgBoxDb [Secondary] On the SAN
    1. S: DATA (10GB initial)
    2. T: LOG (10GB initial)
  3. BizTalkMsgBoxDb [Secondary] On the SAN
    1. S: DATA (10GB initial)
    2. T: LOG (10GB initial)
  4. BizTalkMgmtDb (U:)
  5. BizTalkDTADb (U:)
  6. SSODB (U:)
Ensure that the SQL Agent is running and jobs are enabled (after the BizTalk group is created)

The performance of a BizTalk solution is often highly dependent upon the performance of the SQL Server databases hosting the BizTalk Server MessageBox, Tracking, and BAM databases. For this reason, it is beneficial to have access to resources that can create a high-performance SQL Server environment. In particular, it is critical to have resources available who can align the SQL Server databases with the available SAN storage to ensure that SQL Server makes the best use of the storage subsystem.

Note: Similar steps might also be taken for other machines in the topology, such as a dedicated MSMQ box.

Third-party System Build-outs

There may be other third-party systems which need to be built out and configured before the lab can begin. If subject matter experts (SMEs) are required for these systems, be sure they are scheduled during the build-out and lab execution stages. Be sure they thoroughly document their build-out procedures as well.

BizTalk Server Creation

Creation of Enterprise Single Sign-on (SSO), the BizTalk group, and all of the BizTalk Server boxes might follow the guidelines outlined in the following sections.

Enterprise SSO Server (Master Secret Server)

This step installs SSO only, NOT the full BizTalk Server environment. The master secret server is a special case of the SSO service and can be installed before all other BizTalk servers.

Install OS
  1. Windows Server 2003 ENT SP1
  2. Pagefile partition separated
Install Prerequisites
  1. .NET Framework 2.0
  2. SQL Server 2005 Client Tools
    1. SQL connectivity components and SQL Server Management
    2. Make sure SQL native client tools for TCP/IP is enabled
  3. Enable DTC / DTC Testing
Install BizTalk Server 2006 for SSO
Custom Configure BizTalk Server 2006
  1. Configure SSO Service ONLY

BizTalk Group Creation

The first BizTalk Server box in the group to run the installation and then walk through configuration will be responsible for the initial creation of the BizTalk group.

Install OS
  1. Windows Server 2003 ENT SP1
  2. Pagefile partition separated
Install all Prerequisites
  1. .NET Framework 2.0
  2. SQL Server 2005 Client Tools
    1. SQL connectivity components and SQL Server Management
    2. Make sure SQL native client tools for TCP/IP is enabled
  3. Enable DTC / DTC Testing
Install BizTalk Server 2006
Custom Configure BizTalk Server 2006
  1. Create initial group and associated databases
  2. Ensure that pre-created database location is selected for the Master message box (not the default locations)
Create two additional message boxes on the pre-created secondary databases
Install any additional BizTalk adapters
Install any BizTalk Server hotfixes or service packs needed

Additional BizTalk Servers

Now that the BizTalk group has officially been created, other BizTalk servers can simply join the existing group. Below is a sample checklist for installing and joining additional BizTalk Server boxes.

Install OS
  1. Windows Server 2003 ENT SP1
  2. Pagefile partition separated
Install Prerequisites
  1. .NET Framework 2.0
  2. SQL Server 2005 Client Tools
    1. SQL connectivity components and SQL Server Management
    2. Make sure SQL native client tools for TCP/IP is enabled
  3. Enable DTC / DTC Testing
Install BizTalk Server 2006
Custom Configure BizTalk Server 2006
  1. Join to existing BizTalk Server group
Install any additional BizTalk adapters
Install any BizTalk Server hotfixes or service packs needed

The first box configured by joining the group can also export these join settings to an XML file for import on successive BizTalk Server boxes joining the group. This can expedite the overall group creation.

Application Installation

Deploying the actual applications can be the most error-prone part of the build-out, so take the appropriate time and diligence to ensure this is done properly (and is well documented).

  1. Create Hosts
  2. Create Send/Receive Handlers
  3. Create Host Instances
  4. Create Application/s
  5. Application Installation
    1. Deploy BizTalk Binaries to Group
    2. Import Bindings to Group
    3. GAC BizTalk and non-BizTalk binaries on all boxes
    4. Ensure dependency components exist on all boxes
  6. Install dependency applications
  7. Configure transports and physical endpoints
    1. Install MSMQ Services
    2. Install MSMQ QFE – KB 908926
    3. MSMQ private queue creation
    4. Create file shares and assign correct permissions
  8. Startup Services
  9. Perform Basic Smoke Testing

Miscellaneous Configuration

Afterwards, there may be additional BizTalk Server-related configurations required. Below is a sample checklist of such configurations.

  1. Tracking Disabled at the group level
  2. Ensure all machine times are being properly synchronized
  3. DTC Testing
  4. Ensure DTC logging is off
  5. Ensure any custom tracing/logging is disabled unless absolutely needed
  6. LoadGen Installed and Configured
  7. Performance Logs Setup
  8. Performance Monitors Setup
  9. Setup debugging box for stepping through custom code
  10. Check for unnecessary MSMQ tracking or logging
  11. Defragment all local disks
  12. All virus scanning off / uninstalled
  13. Backup the SSO secret

Note   The 64-bit machines in the group may have special setup considerations.

Validation and Troubleshooting

Once the entire environment is constructed, you may want to use some of the tools available from the BizTalk Server product team and the community to help validate the success of the build-out, troubleshoot, and document it. A list of helpful resources is available in the following table.

Tool Name Description

Microsoft BizTalk Server 2006 Best Practices Analyzer

The BizTalk Server 2006 Best Practices Analyzer examines a BizTalk Server 2006 deployment and generates a list of issues pertaining to best practices standards for BizTalk Server deployments. The Best Practices Analyzer gathers data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries. The Best Practices Analyzer uses the data to evaluate the deployment configuration. The Best Practices Analyzer does not modify any system settings, and is not a self-tuning tool.

BizTalk Assembly Checker

This GUI tool is located on the BizTalk Server 2006 CD under \Support\Tools\x86\BTSAssemblyChecker.exe and ensures that your BizTalk assemblies are properly synchronized across all the servers in your group.

BizTalk SSO Configuration and Troubleshooting Tool

SSO issues are usually related to network settings, MSDTC settings, service settings, account permissions, account group memberships, etc. This tool generates a report about this information. It also dumps out all mappings of all affiliate applications. Optionally, it can also output the BizTalk Server configuration as data items and check on a problematic mapping. This tool does not perform self-diagnosis or offer on-the-spot fixes. To understand the report, one should have a basic understanding of the SSO architecture.

  1. This tool works with the SSO component of BizTalk Server, not other SSO components.
  2. You need to log on as a member of the SSO Administrator group to run the tool on a master secret server or a secret server.
  3. External credentials including BizTalk Server configuration will be revealed in clear text in the report.

UK SDC BizTalk 2006 Documenter (community)

Creates compiled help files for a given BizTalk Server 2006 installation. This tool can be run on an ad-hoc basis using the UI or from the command line as a post build/deploy task to create a compiled help file describing a BizTalk Server 2006 installation. It will compile: BizTalk Host configuration, send/receive port configuration, orchestration diagrams, schema and map content, pipeline process flow, adapter configuration, rule engine vocabularies and policies, and more, and publish them as compiled help files. Optionally you can embed custom HTML content and custom descriptions for all BizTalk artifacts to produce a more customized look and feel to the CHM output.

Final Build-out Steps

Preparation for the lab should also include constructing automated load test scripts, configuring performance monitoring and logging, and establishing a performance baseline on a basic hardware configuration. Techniques for completing each of these steps are described in the "Lab Execution" section to follow.


  • Conduct testing by running automated tests.
  • Evaluate and document the test results.
  • Modify the configuration of BizTalk Server, third-party systems, or solution artifacts to tune for performance based on the results reviewed.
  • Record all changes made to the environment.
  • Repeat this cycle until sufficient goals for the lab are met.
  • Meet at the end of each day to discuss the ground covered, the lessons learned, the issues solved, the new issues opened, and the plan for the next day. Send out status updates to the team nightly by e-mail.
  • Update the project plan with timeline goals met or failed.
  • Remember to have fun!

Conducting Testing

If you currently have production load arriving at another system (perhaps a system you are about to upgrade or replace), you may be able to replay an actual day’s captured load or route/duplicate the day’s normal messages over to the testing environment.

However, this approach may not allow for the control and repeatability often desired in a performance lab. For a more scientific approach, load-generation tools give you the ability to produce predictable and repeatable load patterns, measure throughput and latency accurately, and still closely simulate actual production volumes.

The following sections offer some suggestions for establishing such a testing regime and producing load.

Running Automated Tests

Internally, the BizTalk Performance and Stress teams use a homegrown tool called LoadGen for their testing. In order to encourage this type of testing technique and help customers and partners in the field conduct these test runs, this tool was released to the web as a free download. The following sections provide some useful information about setting up and using this test application.

LoadGen Setup

LoadGen can be downloaded from the Microsoft Download Center.

This tool should be used in a test environment only, and should not be used against a production environment. This tool is provided "as-is" and is not supported.

As indicated on the download page, LoadGen requires the following prerequisites, so make sure these are installed on the box before attempting the LoadGen installation:

.NET Framework 2.0

.NET Framework 2.0 Software Development Kit (SDK)

Another important point is the use of this tool with MSMQ. This transport is supported, but the LoadGen installer does not auto-register the MSMQ COM components during installation, since the MSMQ runtime service may not be installed on every machine. To use the MSMQ transport with LoadGen, you’ll need to manually register the MSMQTransmitter.dll and ComMsmqMonitor.dll files located in the <InstallDirectory>/Bins folder, from a command line as shown:

> regsvr32 MSMQTransmitter.dll

> regsvr32 ComMsmqMonitor.dll

If you do not register the components and intend to use MSMQ, you will receive a runtime error like the following:

Cannot Load Transport DLL C:\Program Files\LoadGen\Bins\MSMQTransport.dll for Section MSMQRxQTxn. Exception has been thrown by the target of an invocation.

LoadGen need not be installed on the same box as a BizTalk Host instance. In fact, it is generally a better practice to install and run LoadGen on a separate and dedicated box to externalize its processing impact from BizTalk Server.

LoadGen is not supported on a 64-bit operating system, so make sure the LoadGen client is running on a 32-bit operating system.

LoadGen Basics

Once installed, the command-line application can be run from the <Install Directory>/Bins folder where you will find LoadGenConsole.exe. This application takes as input an XML configuration file which specifies the load profile to be created. The documentation is quite comprehensive in this area, so be sure to read it. There are also samples included which are worth a look.

Within this configuration file there is the notion of "Sections"; it's not obvious, but these run in parallel with each other, which allows you to create intersecting load patterns. Keep in mind that generating load does not come for free. Keep a close eye on your system resources during these tests and determine whether it's more appropriate to run another LoadGen instance on another machine to achieve the desired load.

LoadGen supports a number of the native BizTalk adapters and has extensibility options to allow for creation of your own adapter harnesses. Samples for using the included adapters and for using custom transports are included in the LoadGen documentation.

In addition, if you need to dynamically change the content of each message sent, you can use the Message Creator feature (scoped within each section) to change each schema instance differently. This could commonly be used to generate unique message identifiers, etc.

LoadGen Tips and Tricks

The following are some additional tips and tricks that may help with your testing of BizTalk solutions with LoadGen:

  • As previously explained, LoadGen is generally better run on separate and dedicated boxes apart from BizTalk Server. This practice will externalize LoadGen’s processing burden and better simulate production load.
  • Be sure to include LoadGen machines in your performance monitoring (with PerfMon) as well. These boxes may also be susceptible to resource limitations.
  • The following formula may be useful when deciding on how to specify the LoadGen configuration:

Figure 13: LoadGen Formula
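
As a rough illustration of how these parameters interact, the arithmetic can be sketched as follows. The parameter names, and the assumption that each thread sends one lot per sleep interval, are taken from memory of the LoadGen documentation; verify both against the shipped docs before relying on them.

```python
# Rough sketch of the rate arithmetic behind a LoadGen section.
# Assumption: each thread sends LotSizePerInterval messages and then
# sleeps SleepInterval milliseconds -- confirm against the LoadGen docs.

def section_rate_msgs_per_sec(num_threads, sleep_interval_ms, lot_size):
    """Approximate steady-state send rate for one LoadGen section."""
    return num_threads * lot_size * 1000.0 / sleep_interval_ms

# Example: 4 threads, 200 ms sleep interval, 5 messages per lot
rate = section_rate_msgs_per_sec(4, 200, 5)   # 100.0 msgs/sec
```

Remember that sections run in parallel, so the offered load is the total across all sections on all LoadGen boxes.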

Achieving Simulated Testing

Part of creating a sustainable BizTalk solution may involve a number of "housekeeping" tasks normally run on the live production system. These may include your default Archiving and Purging scheme, Backup and Restore procedures, SQL Agent Jobs, production logging, etc. All of the routine processes will need to be in place in a production system, so if the goal of the testing is to mimic a real-world production system, these tasks should also be included as part of the tests.

Using a Test Run Checklist

Before conducting a test run, it is also a best practice to develop a setup checklist to ensure no stone is left unturned and all runs begin with consistent configurations. The following example checklist might be used to prime the environment before each run. The tasks are arranged by the "layers" of the logical architecture from LoadGen test harnesses at the top to senders at the bottom.

Layer Task Completed?

LoadGen Servers
  Ensure correct send rate
  Purge outbound MSMQ queues
  Clear event logs

Receive Servers
  Purge MSMQ queues / restart MSMQ services
  Receive locations enabled
  Receive hosts restarted
  Ensure that system resources are unutilized
  Clear event logs

SQL Servers
  MessageBoxClean executed
  Purge Tracking database
  Recycle log files
  Check that DB disk usage is low
  Clear event logs

Send Servers
  Send hosts restarted and enabled
  Send ports started
  Purge outbound MSMQ queues
  Clear event logs
  Performance logs started


Run-time Checklist

Equally important is consistently monitoring the environment during the runs. The following are manual work items to perform while the load testing is in flight.

BizTalk Servers
  1. Check machine event logs for processing errors.
  2. Monitor the BizTalk group with the Admin MMC Console for failures.
  3. Monitor related PerfMon performance counters
SQL Servers
  1. Check machine event logs for processing errors.
  2. Monitor related PerfMon performance counters

Monitoring strategies are elaborated in the "Monitoring and Reviewing the Results" section to follow.

Conducting Throughput Testing

Your testing methods can change dramatically depending on whether your application’s performance requirements are throughput-based or latency-based.

The product documentation and Wayne Clark’s blog entry Understanding BizTalk Server Throughput and Capacity provide approaches to conducting successful throughput testing. The advice is to find the maximum sustainable throughput (MST) of the current configuration by slowly increasing the message volume until signs indicate that the system is unsustainable. Properly followed, this plan also helps to identify the points of bottlenecking; after eliminating each, the pattern is repeated to find the next MST value. Eventually you will push the throughput higher and higher until you arrive at a point where the current topology cannot handle more volume without significant remediating changes.

LoadGen is a good tool for throughput-centric testing. When a run completes, it will output the sending rates and statistics about the documents sent.

Using PerfMon to monitor a destination queue’s incoming rates (e.g. ‘MSMQ Service:Incoming Messages/sec’) on the other end of BizTalk Server, as an example, may be sufficient to see if BizTalk Server is keeping up with the inbound rate sustainably. You might also have luck monitoring the ‘BizTalk:Messaging: Documents Processed/Sec’ counters on each of the BizTalk send host instances in comparison to the sending rates of the load generator. Be warned that receive rates published by BizTalk Server (‘BizTalk:Messaging:Documents Received/sec’) will not be equal to the sending rates of the load generators, as these numbers include time for receive-side processing such as adapter, pipeline, and map execution. While these may be interesting to watch for bottlenecking, they should not be confused with the inbound throughput rates.

However determined, watching the outbound rates in comparison to the inbound rates (from a BizTalk Server perspective) will give you some idea of how BizTalk Server is performing. If flow rates in equal flow rates out, then you have a sustainable system. However, if your outbound rates are less, then it may be an indication that changes are needed somewhere in this messaging tier. Watching for backups at front-end queues, time-outs, or unrecoverable growth of BizTalk Server database tables will also indicate that the load is too high for the current configuration.

When in an overdrive condition, the Spool Size (i.e. ‘BizTalk:MessageBox:GeneralCounters:Spool Size’) will grow as messages are placed on the spool pending processing. Note that the Spool Size reported is per Message Box and needs to be summed across all Message Boxes to determine if/when spool-based throttling will occur.

The host queue length size (i.e. ‘BizTalk:MessageBox:HostCounters:Host Queue – Length’) can also be used to give a more granular view of the number of messages being queued up internally, by showing the queue depth for an individual host. This counter can be useful in determining if a specific host is bottlenecked. Assuming unique hosts are used for each transport, this can be helpful in determining potential transport bottlenecks.
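
To make the sustainability judgment concrete, one simple approach is to sum the Spool Size samples across all Message Boxes at each interval and check whether the total trends upward over the run. A minimal sketch follows; the requirement to sum across Message Boxes comes from the text above, while the least-squares trend is just one illustrative way to detect growth.

```python
# Illustrative sustainability check: sum the per-MessageBox Spool Size
# samples at each interval, then fit a linear trend to the totals.
# A persistently positive slope suggests the inbound rate exceeds what
# the current configuration can sustain.

def spool_trend(samples_per_box):
    """samples_per_box: one list of samples per Message Box, equal lengths.
    Returns the least-squares slope of total spool size per interval."""
    totals = [sum(vals) for vals in zip(*samples_per_box)]
    n = len(totals)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(totals) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, totals))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Two Message Boxes whose combined spool grows steadily -> positive slope
slope = spool_trend([[10, 20, 30, 40], [5, 10, 15, 20]])
```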

Going any further here would merely duplicate the volumes of guidance already present in the core product documentation, so the Performance and Capacity Planning sections of the core product documentation are your next stops on the road to conducting successful throughput testing.

Conducting Latency Testing

Latency testing has a much different approach than throughput testing. In throughput-centric applications, requirements typically come in the form of "BizTalk Server must be able to process 12 million messages within a 24 hour period sustainably", although this is oversimplified. Latency testing, on the other hand, typically has requirements which resemble the sample given below. Therefore, measuring the inbound and outbound throughput rates is not sufficient to determine if application performance requirements are being met.


Performance Requirements

  Throughput Rate: 100 msgs/sec
  Avg. Message Size:
  Requests and responses are asynchronous.
  Average Latency: < 300 ms
  Required Roundtrip Times:
    • 90% of messages in less than 500 ms
    • 95% of messages in less than 1 sec
    • 99.8% of messages in less than 2 secs
    • 100% of messages within 5 secs

Roundtrip times are measured from the time the request enters BizTalk Server to the time the response leaves BizTalk Server and is sent back to the client minus the time it spent in the back-end server.

In simple synchronous request-response scenarios, you will need a load-generation client capable of time-stamping the initiating message and then time-stamping the correlated response message. Using high-resolution time stamps is critical. After the run, aggregating all of the results will determine whether SLAs have been met for the entire message set.
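
As an illustration of that aggregation step, the percentile checks from the sample requirements box could be sketched as follows (the function and variable names are hypothetical):

```python
# Hedged sketch: check captured roundtrip times (in milliseconds)
# against the sample SLA from the requirements box above.

def sla_met(roundtrips_ms):
    """roundtrips_ms: per-message roundtrip times in milliseconds."""
    n = len(roundtrips_ms)
    def frac_under(limit_ms):
        return sum(1 for t in roundtrips_ms if t < limit_ms) / n
    return (frac_under(500) >= 0.90 and      # 90% under 500 ms
            frac_under(1000) >= 0.95 and     # 95% under 1 sec
            frac_under(2000) >= 0.998 and    # 99.8% under 2 secs
            max(roundtrips_ms) <= 5000)      # 100% within 5 secs

# 100 messages: 95 fast, 4 around 900 ms, 1 at 1800 ms -> SLA met
times = [120] * 95 + [900] * 4 + [1800]
ok = sla_met(times)
```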

In more complicated scenarios, where asynchronous requests begin at one server and responses are returned to another, more sophisticated methodologies will have to be adopted. Exact time synchronization between these different machines can be a concern. Message correlation can also be a challenge. And one might also wish to snapshot timestamps at different stages in the end-to-end processing to analyze bottlenecks or to isolate BizTalk Server from other components susceptible to slowdown.

Alaeddin Mohammed and Kevin Lam’s Performance Tuning for Low Latency Messaging white paper offers some strategies for conducting a lab with such rigorous requirements. The paper and the following diagram illustrate how times could be captured and calculated by clients, BizTalk Server pipeline components, and test harnesses, and carried along by the messages.


Figure 14: Possible Latency Measuring Approaches

Other companies may wish to explore options with the Document Tracking and Administration (DTA) or Business Activity Monitoring (BAM) features of BizTalk Server, especially if already using these components in the scenario.

Remember, with latency-mindful solutions, try especially hard to isolate BizTalk Server components in the times being measured. If roundtrip delays are introduced by outside components or resources, you may spend unnecessary and possibly unfruitful effort trying to optimize the BizTalk Server platform for lower numbers.

Alaeddin Mohammed and Kevin Lam’s Performance Tuning for Low Latency Messaging white paper, although developed for BizTalk Server 2004, still remains the definitive source on low latency guidance with regards to BizTalk Server. The Troubleshooting Message Box Latency Issues topic in the BizTalk Server 2006 core documentation is also a must-read.

For further information about optimizing solutions for low latency, be sure to pursue the resources listed in the "Additional Resources" section of this paper.

Monitoring and Reviewing the Results

Some tests will be architected to run for only short durations; others will run overnight, for 24 hours, or even longer for stress purposes. It is important for the team to agree up front on a monitoring strategy built on a strong foundation of accuracy.

Performance Monitoring

In an attempt to be as scientific as possible, complete performance metrics should be logged and backed up for every run using Microsoft Performance Monitor (PerfMon), or an equivalent. This should include metrics for all BizTalk Server boxes, SQL Server boxes, and other machines that are part of your solution.


You might create performance logs sampling at something like 15-second intervals, or longer in proportion to the run duration. Dedicating one separate machine to performance logging for the group is a good idea.
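
Captured .blg logs can later be exported to CSV with relog.exe (for example, relog run1.blg -f CSV -o run1.csv) for post-processing. Below is a minimal sketch of summarizing such an export, assuming the usual PerfMon CSV shape of a timestamp column followed by one column per counter; the server and counter names in the sample are hypothetical.

```python
# Hedged sketch: summarize a PerfMon log after exporting it to CSV.
# PerfMon CSVs put the timestamp in the first column and one counter
# per remaining column; blank cells mean no sample at that interval.
import csv
import io

def summarize(csv_text):
    """Return {counter path: (average, maximum)} for each counter column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    stats = {}
    for i, name in enumerate(header[1:], start=1):
        vals = [float(r[i]) for r in data if r[i].strip() != ""]
        stats[name] = (sum(vals) / len(vals), max(vals))
    return stats

sample = ('"(PDH-CSV 4.0)","\\\\BTS01\\Processor(_Total)\\% Processor Time"\n'
          '"08/01/2006 10:00:00","40.0"\n'
          '"08/01/2006 10:00:15","60.0"\n')
stats = summarize(sample)   # CPU counter: average 50.0, maximum 60.0
```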

For repetitive counters across numerous machines, one tip is to add all counters for one of the BizTalk Server boxes, save off the logging configuration file as an .HTM file, and then manually edit the file (say, in WordPad) to add additional BizTalk servers that are part of the group. Be careful—the format is finicky with some fields maintaining fixed widths (see the figure below). Be sure to also update the CounterCount field after making changes. Also be aware that certain hosts may not be present on certain boxes in your topology, so unless you are adding all counters on all hosts (*), this may be an additional configuration step.


Figure 15: Quickly Editing the PerfMon Configuration
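
The duplication step described above can also be scripted. The sketch below works on the counter paths themselves; the server names are hypothetical, and the exact PARAM layout of the .HTM file should still be verified by hand against a file saved from your own PerfMon.

```python
# Illustrative helper for the manual-edit tip above: given the counter
# paths saved for one BizTalk Server box, generate equivalent paths for
# the other boxes in the group plus the updated CounterCount value.
# The \\SERVER\Object(Instance)\Counter shape is standard PerfMon path
# syntax; the surrounding .HTM structure is left to a manual edit.

def expand_counter_paths(paths, src_server, all_servers):
    out = []
    for server in all_servers:
        for p in paths:
            out.append(p.replace("\\\\" + src_server + "\\",
                                 "\\\\" + server + "\\", 1))
    return out, len(out)   # second value -> new CounterCount

paths = [r"\\BTS01\Processor(_Total)\% Processor Time",
         r"\\BTS01\BizTalk:Messaging(host)\Documents Processed/Sec"]
expanded, counter_count = expand_counter_paths(paths, "BTS01",
                                               ["BTS01", "BTS02", "BTS03"])
# counter_count is now 6 (two counters for each of three servers)
```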

Another tip is to use the same line thicknesses for similar machine types or counters. This will allow for a more readable console.

Live Monitoring

You should also consider using one live console for ad-hoc monitoring during runs. The "View Graph" and "View Report" features of PerfMon are especially useful for in-flight monitoring. Creating multiple consoles, each specializing in some class of monitoring and zeroing in on similar counters (e.g., CPU across all boxes) will also improve readability.

What to Track

The BizTalk Server 2006 Performance Counters topic explains in detail each of the counters exposed by the adapters, the messaging engine, the Message Box, and other components of BizTalk Server, including BAM, BRE, BAS, etc.

For insight into which counters to track specifically, consult the BizTalk Server Performance blog and, of course, the core documentation. Based on your type of scenario and testing, there are recommendations for particular metrics to watch and values that should raise eyebrows.

Engine throttling is also something to be aware of. BizTalk Server 2006 introduces more sophisticated engine throttling behavior that prevents many unrecoverable situations by closely monitoring many facets of the BizTalk Server runtime and adjusting processing accordingly. Easily configurable throttling parameters exist at the host level, allowing for tuning at a fine level of granularity. The Host Throttling Performance Counters topic explains how to watch for the appearance of host throttling and the significance of these events.

Custom Performance Counters

During performance labs it can often be useful to create custom performance counters against your own databases or even gather additional BizTalk Server database metrics.

As an example, one may wish to regularly measure the depths of the BizTalk Server Parts or PartZeroSum tables as a finer measure of sustainability. To enable logged measurement of these table depths, a simple stored procedure can be written and called periodically by a SQL Agent Job (say, once a minute), as the example below illustrates.

DECLARE @Parts int
DECLARE @PartZeroSum int

SELECT @Parts = count(*) FROM Parts WITH (NOLOCK)
exec sp_user_counter1 @Parts

SELECT @PartZeroSum = count(*) FROM PartZeroSum WITH (NOLOCK)
exec sp_user_counter2 @PartZeroSum


For more on custom SQL user counters see the SQL Server, User Settable Object article in the SQL Server Books Online.

While this may help in performance lab diagnostics, this approach is not generally recommended for use in a production environment.

Other Tips

Restarting host instances at the beginning of each run will ensure performance counters (and hence aggregations) are properly reset. Clearing the PerfMon display will also ensure cached data is discarded.

BizTalk Server Monitoring

Monitoring the health of your BizTalk servers during and after a run is also necessary. Errors may be logged in the Windows® Application event logs, messages may become suspended, and so on. Given that you will need access to many machines, and that it is hard to keep track of them all, it pays to plan for this properly. Below are a few suggestions for reducing the management burden and multi-machine challenges.

Single Management Console

In multi-machine topologies, managing the status of many machines can quickly become unwieldy. Creating a single Microsoft Management Console (MMC) that incorporates several plug-ins will make conducting a lab much easier.

For starters, open MMC.exe and add a snap-in for BizTalk Server Administration. This will allow you to monitor the entire BizTalk group and benefit from features like its Group Hub Page, bulk suspend/terminate/resume operations, and bulk host instance restarts.

Thereafter, add an Event Viewer plug-in for each and every BizTalk Server machine in the group. Adding Event Viewers for the SQL Server boxes and other machines in the topology may also be handy. Plug-ins for other applications, such as Internet Information Services (IIS) and Microsoft Message Queuing (MSMQ), might also be a good fit for this MMC.


Figure 16: A Truly One-Stop-Shop MMC

When done, save this console .MSC file to the desktop of a monitoring machine, perhaps the same machine used to perform the performance counter logging. This one-stop-shop console will allow you to remotely monitor all boxes from one location for application errors and identify aberrations.

Single Terminal Services Console

If you don’t plan on spending your lab weeks in the confines of a server room, then Microsoft Terminal Services (TS) is a common way to remotely administer the machines in the operating topology. The Terminal Services team smartly offers its own MMC plug-in, called "Remote Desktops", which allows users to add multiple connections under one snap-in. This makes it very easy to context-switch between machines without having to repeatedly open the TS client.


Figure 17: A Terminal Services MMC

You may even want to incorporate these plug-ins into your single management console mentioned in the previous section.

SQL Server Monitoring

If you are using SQL Server 2005, monitoring is greatly simplified. SQL Server Management Studio, a graphical and integrated environment for accessing, configuring, managing, administering, and developing all components of SQL Server, can be close to a one-stop shop for all things SQL Server. The console even allows you to connect to multiple SQL Server instances, so administration for the entire environment can be hosted from one box.

SQL Server Management Studio has a great feature called the Summary Page, which exists for various types of database objects. For databases themselves, the Summary Page’s Reports feature is very useful. It can create disk utilization reports which show, down to the table level, the physical growth of your databases. This can be helpful in identifying bottlenecks in BizTalk Server or custom databases.


Figure 18: SQL Server 2005 Summary Reports

When problems do arise at the data tier, viewing the servers’ application event logs and viewing SQL Server Error Logs can help diagnose a problem.

SQL Server Profiler is another helpful tool for tracing the server’s activity, even under high load. You can aggregate job durations, monitor stored procedure executions, etc. Just be advised that profiling has a performance impact on the SQL engine, so use this tool sparingly and only when needed for troubleshooting.

For optimizing SQL Server Performance, the SQL Server 2005 Books Online contain volumes of information on the subject, so be sure to have your DBAs read this information.

Other Resource Monitoring

Since your entire solution consists of more than just BizTalk Server and SQL Server, monitoring of other third-party systems, network resources, disk subsystems, and other components of the environment may also be required. Consult your product documentation for each of these to determine the best monitoring strategies.

Tuning for Performance

Many Microsoft developers are well versed in functional testing, but often neglect thorough nonfunctional testing before running applications in production. Conducting a well-executed performance lab is a critical component of this nonfunctional testing and is intended to uncover any problems with your current design, identify hardware limitations, and suggest stabilizing improvements to your overall architecture.

The results of each successive performance run may support decisions to debug your current test cases, deployment, application code, or test scripts. These conclusions may also support decisions to change your topology, upgrade current hardware, scale up or out, modify platform settings (e.g. .NET CLR runtime properties, IIS application pool configurations, operating system configurations), adjust BizTalk Server defaults (e.g. throttling parameters, batch sizes, polling intervals), tune disk subsystems, optimize network resources, explore other hardware alternatives, or refactor or rewrite code. After each of these changes, the execution cycle may be restarted, returning to conduct further testing, review the results, and remove still more bottlenecks.

For expert guidance on how to monitor environmental resources, detect and remove bottlenecks, modify BizTalk Server settings, and tune the platform for your applications, use the resources in the "Additional Resources" section of this paper. The BizTalk Server product group has realized the importance of providing prescriptive guidance for optimizing your BizTalk applications, but this paper is just the start. The referenced resources will walk you through the “forensics and surgery” of the performance lab and help to ensure that you conclude with a healthy platform that will support your business for years to come.


  • Arrive at sufficient performance lab goals.
  • Present final performance characteristics of the solution.
  • Finish the Lab Manual or Engagement Document.
  • Update the Executive Summary with conclusions drawn from the lab.
  • Conclude the engagement.

The finish line! With all of the proper planning in place, you and your team can conduct a successful performance lab with BizTalk Server 2006. When completed with the engagement, remember to present your findings to the business and celebrate with the team that made it possible!

Below are some pointers to additional resources which will help you conduct a successful performance lab engagement with BizTalk Server.

Documentation Articles

The core product documentation is always the best place to begin. Below are some articles in particular which warrant close reads.

Name Description

BizTalk Server 2006 Online Documentation

Includes a variety of resources that can help you learn to develop, deploy, administer, and use BizTalk Server 2006.

Performance and Capacity Planning

An entire section of the documentation is meant to address the concerns of performance and capacity planning with regards to BizTalk Server 2006. Whether you have worked with previous versions of BizTalk Server or are brand new to the 2006 product, you should become intimately familiar with this section of the docs.

Performance Tips and Tricks

This section provides useful tips for ensuring optimal performance for your BizTalk Server system.

Planning for Sustained Performance

This section describes how to plan, test, and scale your BizTalk Server system along an entire application life cycle, so you always maintain optimal performance.

Project Planning Recommendations by Phase

The goal of this section is to provide a set of recommendations that will help you plan appropriately for a successful BizTalk Server 2006 development project with regard to performance.

Identifying Performance Bottlenecks

This topic explains how to identify and resolve the performance bottlenecks in the BizTalk and Database tiers.

Performance Counters

A technical reference for BizTalk Server performance counters, including the many newly released 2006 counters.

Configuration Parameters that Affect Adapter Performance

This section describes configuration settings that can affect the performance of BizTalk Server adapters.

How BizTalk Server Processes Large Messages

This article provides guidelines for working with large messages in BizTalk Server 2006.

Knowledge Base Articles

The following KB article is also a must-read.



You experience blocking, deadlock conditions, or other SQL Server issues when you try to connect to the BizTalkMsgBoxDb database in BizTalk Server 2006 or in BizTalk Server 2004.



Some free services, such as KBAlertz, allow you to subscribe to KB articles by technology and stay abreast of new Knowledge Base articles as Microsoft publishes them.

BizTalk Server 2006 Documents and White Papers

Outside of the core product documentation, there are some other pertinent articles related to performance and capacity planning which are worth reading.


Installation and Upgrade Guides

The installation instructions explain how to install BizTalk Server 2006 on Windows XP, Windows 2000 Server, or Windows Server® 2003 in a single-server or multi-server environment.

Installing SQL Server 2005

SQL Server 2005 Books Online explains how to properly install and configure the database management system.

Installing SQL Server 2000

A walkthrough for installing SQL Server 2000.

BizTalk Server 2006 Comparative Adapter Study

This white paper describes the results of a comparative adapter performance study—a set of tests that compared each adapter that ships with Microsoft BizTalk Server 2006 against its BizTalk Server 2004 SP1 counterpart under identical conditions. The test techniques used to arrive at the maximum sustainable throughput (MST) are described in detail, and recommendations for using and configuring specific adapters are provided.

BizTalk Server 2004 Documents

Many of the BizTalk Server 2004 articles and white papers are still quite relevant to the 2006 product and illustrate universal strategies that companies should adopt. The following table lists some of the best.


BizTalk Server 2004 Performance Characteristics White Paper

This document provides information about the performance characteristics of key Microsoft BizTalk Server 2004 configurations and components, such as messaging, pipeline, and orchestration. This has been updated for BizTalk Server 2006 and included in core documentation, but the 2004 paper is still worth a read.

Performance Tuning for Low Latency

This paper by Alaeddin Mohammed and Kevin Lam describes performance tuning suggestions for low-latency messaging.

Blogs and Web Links

The following table lists Internet resources that continue to be timely and authoritative sources of information about BizTalk Server and performance.


BizTalk Server Product Team Blogs

Straight from the product team, these blogs present the latest and greatest resources on the BizTalk Server product.

BizTalk Server 2006 Scripts Site

This site continues to be updated with scripts which help with installation, configuration, management, maintenance, and testing of BizTalk Server solutions.

BizTalk Server Developer Center

Online resources for BizTalk Server developers.

BizTalk Server

Online resources targeted at BizTalk Server administrators and IT professionals.

Improving .NET Application Performance and Scalability

This guide provides end-to-end guidance for managing performance and scalability throughout your application life cycle to reduce risk and lower total cost of ownership. It provides a framework that organizes performance into a handful of prioritized categories where your choices heavily impact performance and scalability success. The logical units of the framework help integrate performance throughout your application life cycle. Information is segmented by roles, including architects, developers, testers, and administrators, to make it more relevant and actionable. This guide provides processes and actionable steps for modeling performance, measuring, testing, and tuning your applications. Expert guidance is also provided for improving the performance of managed code, ASP.NET, Enterprise Services, Web services, remoting, ADO.NET, XML, and SQL Server.

Webcasts and Presentations

The following table lists some Webcasts available for download on the subject of BizTalk Server performance.


Implementation and Tuning Best Practices for BizTalk Server Solutions

Learn about the development techniques used in building Microsoft BizTalk Server solutions, including common development patterns and best practices for implementing business processes in BizTalk applications. This Webcast explains how to design for operations by properly factoring your application to support versioning, best practices for versioning your artifacts, and how to optimize your orchestration design for performance.

Presenter: Jeff Nordlund, Program Manager, Microsoft Corporation


Decks and session recordings should be made available on the TechEd Web site. An optional DVD, which includes recordings of all of the conference’s presentations, can also be purchased.

Specifically seek out “Building and Maintaining a Performant and Healthy BizTalk Solution” by Lee Graber, “BizTalk Server Capacity Planning” by Wayne Clark, and “Monitoring and Troubleshooting BizTalk Server 2006 Solutions” by Kris Shankar.



Microsoft BizTalk Server 2004 has been architected for high performance and scalability. This Webcast dives into BizTalk Server 2004 performance characteristics and explores real-world lessons from the Joint Development and Early Adopter Programs.


Tools

The table below lists some tools provided by Microsoft and the community to assist with different stages of conducting a performance lab engagement.



This tool is intended for developers and IT professionals who need to simulate load against BizTalk Server 2004 or 2006. You can use it to generate load for performance and stress testing of a BizTalk Server deployment, and developers can extend it to simulate load for custom transports. This tool should be used in a test environment only, never in a production environment. It is provided "as-is" and is not supported.
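To give a rough sense of how such load simulation works, the sketch below drops test messages into a folder at a steady rate, which a File adapter receive location could then pick up. This is a minimal illustration only, not part of any shipped tool; the folder path and message template are hypothetical placeholders you would replace with a real receive location and a representative test message.

```python
import os
import time

# Hypothetical placeholders: point these at a real File adapter receive
# location and a representative test message for your solution.
RECEIVE_LOCATION = "./file_receive_location"
MESSAGE_TEMPLATE = ('<ns0:Order xmlns:ns0="http://example.org/orders">'
                    '<Id>{0}</Id></ns0:Order>')

def generate_load(total_messages, messages_per_second):
    """Drop XML files into a folder at a steady rate to simulate load."""
    os.makedirs(RECEIVE_LOCATION, exist_ok=True)
    interval = 1.0 / messages_per_second
    for i in range(total_messages):
        path = os.path.join(RECEIVE_LOCATION, "msg_{0:06d}.xml".format(i))
        with open(path, "w") as f:
            f.write(MESSAGE_TEMPLATE.format(i))
        time.sleep(interval)  # crude pacing; a real tool batches and threads

# Example run: 25 messages at roughly 50 messages per second.
generate_load(total_messages=25, messages_per_second=50)
```

A dedicated load tool measures and sustains throughput far more accurately than this single-threaded loop, but the principle is the same: feed messages to a receive location at a controlled rate while you watch the platform's performance counters.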

BizTalk Message Box Clean

This script is intended to return your BizTalk Server test environment to a fresh state with regard to the Message Box between runs. It deletes all running instances and all information about those instances, including state, messages, and subscriptions, but leaves all activation subscriptions so that you do not have to re-enlist your orchestrations or send ports.

Note: This tool is not supported on production systems.

Lab Services

The following table enumerates some lab services that the extended Microsoft team provides to assist with conducting performance labs. In all cases, work with your Technical Account Manager (TAM) to apply for an engagement.


MSServices Lab Offering: BizTalk Server Tune & Stress

Developing a BizTalk application, setting up the servers and databases, and tuning the application and environment involve many different variables. This solution offering can help customers tune and stress their BizTalk applications to get better throughput and performance out of their systems. Speak with your Technical Account Manager (TAM) for more information.

Microsoft Technology Centers: BizTalk Design, POC and Performance Offerings

MTCs can help customers from all named accounts (Global, Strategic, Major, and Corporate), including partners and ISVs with .NET Server revenue potential, with BizTalk Server assistance, addressing possible pain points as follows:

A target customer might need assistance with:

  • Visualizing how BizTalk Server applies to their business challenges
  • Understanding the architecture of a customized BizTalk Server solution
  • Verifying and validating the performance and capabilities of BizTalk Server-based solutions

Possible pain points include:

  • Inexperience developing BizTalk Server solutions
  • Lack of confidence that BizTalk Server can accomplish their business goals
  • Unsure how to approach the design or development of a BizTalk Server solution
  • Dissatisfied with the performance of their current BizTalk Server solution
  • Unsure of how BizTalk Server will perform/scale under load

Your Technical Account Manager (TAM) can help set up an engagement at an MTC near you.

CSD Customer Collaboration Center (Redmond)

The End-to-End (E2E) group is the home of the Connected Systems Division's End-to-End program where we focus on ensuring that BizTalk Server and all Connected System Division products and technologies meet or exceed customers’ expectations during the full life cycle of their interactions with the product. The E2E group runs the CSD Customer Collaboration Center, which is a customer-focused lab used for in-depth customer and partner engagements or projects. The lab is specifically for customer-based engagements that are being run by members of the product team under the sponsorship of the End-to-End team or the Customer Programs Team. Speak with your Technical Account Manager (TAM) to submit a lab request.

Enterprise Engagement Center (Redmond)

The Enterprise Engagement Center, or EEC for short, is a specialized lab located on the Redmond campus. As part of the Windows Server Customer Experience Team, the EEC improves quality, drives capabilities, and accelerates the adoption of Microsoft products by capturing and testing real customer scenarios prior to product release. Customers come to the EEC to pilot deployments, upgrades, migrations, lock-downs, and enhancements. The EEC provides product groups the opportunity to listen to customers as they are setting up their environments and testing. EEC customer scenarios provide real-world input to drive Microsoft engineering excellence. The EEC has hosted hundreds of customers over the last several years and has become the gold standard for customer test labs throughout the world. Contact your Technical Account Manager (TAM) to fill out an engagement application form.


Conclusion

As the old saying goes, "Failing to plan is planning to fail." Conducting a successful performance lab will help to ensure a successful and predictable BizTalk Server implementation for years to come. Taking the necessary time up front to plan your performance lab’s approach and methodology will save a lot of time during the running of the lab and increase your chances of success.

For more information