BizTalk Server 2006: Comparative Adapter Study

Microsoft Corporation

September 2006

Summary: This document reports the results of tests performed on various adapters. Test criteria, procedures, adapter configurations, and results are presented. The data can help readers determine which adapter configuration is best suited for their specific business requirements.

This study describes the results of a comparative adapter study—a set of tests that compared each adapter that ships with Microsoft® BizTalk® Server 2006 against its BizTalk Server 2004 SP1 counterpart under identical conditions. Also included are performance test results for the POP3 and Windows® SharePoint® Services adapters, which were not available for BizTalk Server 2004 SP1. The resulting data proves interesting from a solution design standpoint because understanding the performance of each adapter is critical when deciding which adapters to select to meet performance requirements. The test techniques used to arrive at the maximum sustainable throughput (MST) are described in detail, and recommendations for using and configuring specific adapters are provided.

As detailed in the following sections, each adapter was tested using a single adapter instance (also called host instance) in order to level the playing field so a comparison of adapter performance could be made. As a result, the data presented does not represent the maximum throughput achievable for any specific adapter. If appropriate scale-up and/or scale-out capabilities are leveraged, throughput can be increased. For example, when using the HTTP adapter, overall throughput can be increased by creating multiple HTTP receive location instances and load balancing across them (for example, by using an IP load-balancing switch as is typical of Web farms).

The test cases presented here are designed to stress the adapters and gauge their performance in relative isolation. As a result, there are few, if any, components incorporated into the test cases other than the adapters themselves. In a real-world situation, there will likely be many other components (such as tracking, pipeline components, orchestrations, maps, and schemas) that affect the performance characteristics of the system. In scenarios where something other than a single adapter instance is the performance bottleneck, throughput will likely vary significantly from the results presented here.

The content of the study is primarily intended to help users who are considering upgrading from BizTalk Server 2004 SP1 to BizTalk Server 2006 and want to understand what, if any, performance differences they can expect from the adapters. Also targeted are users who want to get an initial idea of the performance of each adapter relative to the others. A basic knowledge of BizTalk Server is assumed.

From this document, you will learn the following:

  • Performance of each transport adapter under identical circumstances for both BizTalk Server 2004 SP1 and BizTalk Server 2006.
  • Specific techniques for optimizing selected adapters.
  • Test cases and test techniques applicable to any solution deployed on BizTalk Server 2006 or BizTalk Server 2004 SP1.

Test Scenarios

The scenarios used in the adapter tests were designed to be as simple as possible so that the throughput achieved was attributable to the adapter performance and not to any other component in the system (for example, pipeline components or orchestrations). With that goal in mind, only those components necessary to drive throughput end-to-end through the test cases were used. The following list gives each test case together with its receive path, orchestration, send path, and notes.

File to File Local
  Receive path: File adapter, local share, passthrough pipeline
  Orchestration: None
  Send path: File adapter, local file share, passthrough pipeline
  Notes: Receiving from and sending to both local and remote UNC shares are common scenarios.

File to File UNC
  Receive path: File adapter, remote UNC share, passthrough pipeline
  Orchestration: None
  Send path: File adapter, remote UNC share, passthrough pipeline
  Notes: See the note for "File to File Local."

File to SMTP
  Receive path: File adapter, local share, passthrough pipeline
  Orchestration: None
  Send path: SMTP adapter, passthrough pipeline
  Notes: The SMTP adapter supports send only, so the File adapter was used on the receive side for this test.

File to SQL
  Receive path: File adapter, passthrough pipeline
  Orchestration: None
  Send path: SQL adapter, passthrough pipeline
  Notes: To test SQL send performance in isolation, the File adapter was used on the receive side, because a single File adapter instance is known to be faster than a single SQL adapter instance.

FTP to FTP
  Receive path: FTP adapter, passthrough pipeline
  Orchestration: None
  Send path: FTP adapter, passthrough pipeline
  Notes: Simple passthrough FTP scenario. The FTP server included with IIS was used for both receive and send.

HTTP to HTTP 1Way
  Receive path: HTTP adapter, XMLReceive pipeline
  Orchestration: None
  Send path: HTTP adapter, passthrough pipeline
  Notes: "1Way" means the send and receive URLs were different; that is, the scenario was not request-response.

HTTP to HTTP 2Way
  Receive path: HTTP adapter, XMLReceive pipeline
  Orchestration: Simple receive/send
  Send path: HTTP adapter, passthrough pipeline
  Notes: "2Way" means the scenario was request-response.

MQSeries to MQSeries Inorder
  Receive path: MQSeries adapter, passthrough pipeline, configured for in-order delivery
  Orchestration: None
  Send path: MQSeries adapter, passthrough pipeline
  Notes: A separate server ran the MQSeries service for the test. No indication was observed that this server was a bottleneck.

MQSeries to MQSeries Non-inorder
  Receive path: MQSeries adapter, passthrough pipeline
  Orchestration: None
  Send path: MQSeries adapter, passthrough pipeline
  Notes: See the note for "MQSeries to MQSeries Inorder."

MSMQ to File Nontransactional Local
  Receive path: MSMQ adapter, passthrough pipeline
  Orchestration: None
  Send path: File adapter, passthrough pipeline, local file share
  Notes: "Transactional/Nontransactional" refers to the configuration of the MSMQ adapter. "Remote/Local" refers to whether the queue is on a remote server or on the local server. Note that transactional MSMQ reads are supported only on local queues.

MSMQ to File Nontransactional Remote
  Receive path: MSMQ adapter, passthrough pipeline
  Orchestration: None
  Send path: File adapter, passthrough pipeline, local file share
  Notes: See the note for "MSMQ to File Nontransactional Local."

MSMQ to File Transactional Local
  Receive path: MSMQ adapter, passthrough pipeline
  Orchestration: None
  Send path: File adapter, passthrough pipeline, local file share
  Notes: See the note for "MSMQ to File Nontransactional Local."

File to MSMQ Transactional Remote
  Receive path: File adapter, passthrough pipeline
  Orchestration: None
  Send path: MSMQ adapter, passthrough pipeline, remote queue
  Notes: See the note for "MSMQ to File Nontransactional Local."

File to MSMQ Nontransactional Remote
  Receive path: File adapter, passthrough pipeline
  Orchestration: None
  Send path: MSMQ adapter, passthrough pipeline, remote queue
  Notes: See the note for "MSMQ to File Nontransactional Local."

MSMQT to MSMQT
  Receive path: MSMQT adapter, passthrough pipeline
  Orchestration: None
  Send path: MSMQT adapter, passthrough pipeline
  Notes: Local queues were used on the receive and send sides. MSMQT is transactional only.

POP3 to File MIME
  Receive path: POP3 adapter, passthrough pipeline
  Orchestration: None
  Send path: File adapter, passthrough pipeline
  Notes: A local file share was used on the send side.

SOAP to SOAP 1Way
  Receive path: SOAP adapter, passthrough pipeline
  Orchestration: None
  Send path: SOAP adapter, passthrough pipeline
  Notes: "1Way" means the send and receive URLs were different; that is, the scenario was not request-response.

SOAP to SOAP 2Way
  Receive path: SOAP adapter, passthrough pipeline
  Orchestration: Simple receive/send
  Send path: SOAP adapter, passthrough pipeline
  Notes: "2Way" means the scenario was request-response.

SQL to File
  Receive path: SQL adapter, passthrough pipeline
  Orchestration: None
  Send path: File adapter, passthrough pipeline
  Notes: A local file share was used on the send side.

SSL HTTP 1Way
  Receive path: HTTP adapter, XMLReceive pipeline
  Orchestration: None
  Send path: HTTP adapter, passthrough pipeline
  Notes: Secure Sockets Layer (SSL) was configured on the HTTP receive side.

SSL HTTP 2Way
  Receive path: HTTP adapter, XMLReceive pipeline
  Orchestration: Simple receive/send
  Send path: HTTP adapter, passthrough pipeline
  Notes: Secure Sockets Layer (SSL) was configured on the HTTP receive side.

SSL SOAP 1Way
  Receive path: SOAP adapter, passthrough pipeline
  Orchestration: None
  Send path: SOAP adapter, passthrough pipeline
  Notes: SSL was configured on the SOAP receive side.

SSL SOAP 2Way
  Receive path: SOAP adapter, passthrough pipeline
  Orchestration: Simple receive/send
  Send path: SOAP adapter, passthrough pipeline
  Notes: SSL was configured on the SOAP receive side.

Windows SharePoint Services No NLB
  Receive path: Windows SharePoint Services adapter, passthrough pipeline
  Orchestration: None
  Send path: Windows SharePoint Services adapter, passthrough pipeline
  Notes: A separate server was used as the Windows SharePoint Services server from which messages were received and to which messages were sent.

SAP to SAP
  Receive path: Microsoft BizTalk Adapter v2.0 for mySAP™ Business Suite (SAP adapter), passthrough pipeline
  Orchestration: None
  Send path: SAP adapter, passthrough pipeline
  Notes: SAP was installed on a separate server.

Note the following variations on the preceding test cases:

  • Two file sizes were used for each test case, 200 KB and 2 KB, to give the reader some idea of the effect of file size. The data provided below is divided into results for each file size. These sizes were chosen because they are representative of common customer scenarios. Note that file size can have a significant effect on performance, and the throughput-versus-file-size curve is not always linear; a more in-depth analysis of the effect of file size on throughput is beyond the scope of this study.
  • Except for the 2Way HTTP and SOAP test cases, all cases were tested without using orchestration so that throughput is influenced primarily by the adapters. The HTTP and SOAP 2Way test cases each use a simple orchestration containing only a receive shape connected to a send shape. This was done to facilitate the request-response or "2Way" test case that is so common for HTTP and SOAP. The orchestration was published as a Web service in the case of the SOAP adapter tests.
  • The passthrough pipeline was used for all receiving and sending except for HTTP receive, which used the XMLReceive pipeline in order to properly resolve to a schema type.
  • The send and receive adapters for each test case were in separate hosts with the following exceptions:
    • MSMQT – The host must be the same for both send and receive.
    • HTTP and SOAP 2Way – For request-response scenarios such as these, receiving and sending are carried out by the same adapter host instance.
  • The receive adapter differed from the send adapter in the following circumstances:
    • POP3 and SMTP – Only receive is supported by POP3 and only send is supported by SMTP.
    • SQL and MSMQ – Experience has shown that the performance behavior of the receive and send sides of the SQL and MSMQ adapters, respectively, can be significantly different. To avoid "masking" the performance of one side by the other, we tested receive and send performance of these adapters isolated from each other.
    • To facilitate end-to-end processing of messages, the File adapter, which is known to be one of the faster adapters when receiving from or sending to a local share, was used to complement test cases where only the send side or only the receive side of another adapter was being tested. For example, to test MSMQ send in isolation from MSMQ receive, we used the File adapter to receive in the "File to MSMQ" test cases.

Test Topology

In order to negate the effects of hardware on the relative throughput, a common topology (pictured below) was used for all of the adapter tests.


Figure 1: Common Test Topology

The topology spans three machines:

  • Server 1: Makes up the BizTalk Server tier. It has two 3 GHz processors, each with 512 KB of L2 cache and 1 MB of L3 cache, 2 GB of RAM, and two 32 GB 15K SCSI drives. BizTalk Server is configured with four hosts: one for receiving, one for transmitting, one for processing orchestrations in the SOAP and HTTP test cases, and one for tracking (although tracking was disabled for the adapter test runs).
  • Server 2: Makes up the SQL Server™ tier, with four 2 GHz processors, each with 512 KB of L2 cache and 1 MB of L3 cache, and 4 GB of RAM. It has two 32 GB 15K SCSI drives, one of which holds program data while the other holds the LDF files for SQL Server, plus a connection to a SAN drive that hosts the MDF files for SQL Server. Only one BizTalk MessageBox database was used. Microsoft SQL Server 2000 SP4 was used throughout.
  • Server 3: Matches the specifications for the BizTalk Server tier machine and is used to generate and send the XML files used to test the system.

Test Procedures

The LoadGen tool, available at http://www.microsoft.com/downloads/details.aspx?FamilyId=C2EE632B-41C2-42B4-B865-34077F483C9E&displaylang=en, was used to transmit the files to the receiving adapter. For the MQSeries adapter, LoadGen dropped files directly into the MQSeries queue to retain the end-to-end nature of the other tests, although a significant performance increase was observed when the queue was preloaded with the entire test run's input data ahead of time. For all cases, output documents were continually deleted from the transmit location by a looping script, and all virus scanners were disabled to prevent interference.
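
The looping cleanup script is not included in the study; the sketch below shows the kind of loop it describes. The folder path, file pattern, and polling interval are assumptions, not details from the original test harness.

```python
import glob
import os
import time


def purge_output(folder, pattern="*.xml", interval_sec=5, iterations=None):
    """Continually delete output documents from a transmit folder.

    Pass iterations=None to loop indefinitely, as a test-run cleanup
    script would; a finite count is useful for trying the function out.
    """
    done = 0
    while iterations is None or done < iterations:
        for path in glob.glob(os.path.join(folder, pattern)):
            try:
                os.remove(path)
            except OSError:
                # The send adapter may still hold a lock on the file;
                # skip it and retry on the next pass.
                pass
        done += 1
        if iterations is None or done < iterations:
            time.sleep(interval_sec)
```

Keeping the transmit folder empty prevents disk backlog on the send side from skewing the throughput measurement.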

The goal in each test run was to determine the maximum sustainable throughput (MST) for each adapter in an identical environment. To do so, we changed the "message count in database" threshold to 2000 for each host on the throttling thresholds configuration page. We set the monitoring component of LoadGen to adjust the rate of document creation to meet a goal of maintaining 300 to 500 documents in a receive queue or 3000 to 5000 documents in a receive folder. With these settings, BizTalk Server was allowed to reach a sustainable state internally by means of built-in throttling while LoadGen attained an acceptable rate of file creation by monitoring the input backlog. For details about measuring MST, including data on throughput profiles and their effect on database backlog, see the topic "Engine Performance Characteristics" in the BizTalk Server 2006 product documentation.
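
The backlog-driven rate adjustment described above can be sketched as a simple feedback step. This is an illustration only: LoadGen's actual controller is internal to the tool, and the 10 percent step size and function name are assumptions.

```python
def adjust_rate(current_rate, backlog, low=300, high=500, step=0.10):
    """One step of a backlog-driven feedback loop.

    If the receive backlog falls below the low-water mark, raise the
    send rate; if it rises above the high-water mark, lower it;
    otherwise hold steady. The low/high defaults mirror the 300-500
    documents-in-queue goal used in the tests.
    """
    if backlog < low:
        return current_rate * (1.0 + step)
    if backlog > high:
        return current_rate * (1.0 - step)
    return current_rate
```

Repeatedly applying such a step lets the load generator settle at whatever input rate the receive adapter can sustain.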

We allowed each test to execute for one hour while PerfMon.exe collected performance data about the run. After the run finished, we analyzed the performance data from the 15th minute to the 48th minute (25% to 80% of the total run time) and we used the average of the documents-processed-per-second performance counter to indicate the maximum sustainable throughput for the run. By taking the "heart cut" data from the dataset and avoiding the "boundary" data (that is, startup and shutdown periods), we more accurately emulate a longer sustained throughput such as that experienced by systems that run indefinitely under load.
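
The "heart cut" averaging can be expressed directly: drop the first 25 percent and last 20 percent of the sampled counter values and average the remainder. The function below is a minimal sketch of that calculation; the sampling of the documents-processed-per-second counter into a list is assumed.

```python
def heart_cut_mst(samples, start_frac=0.25, end_frac=0.80):
    """Average a per-interval throughput counter over the middle of a
    run (the 25%-80% window), excluding startup and shutdown data.

    `samples` is the ordered list of documents-per-second readings.
    """
    n = len(samples)
    lo = int(n * start_frac)
    hi = int(n * end_frac)
    if lo >= hi:
        raise ValueError("run too short for the requested window")
    window = samples[lo:hi]
    return sum(window) / len(window)
```

For a one-hour run sampled once per minute, this reproduces the 15th-to-48th-minute window described above.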

Recommendations

For the most part, all test cases were run with the default settings for the operating system, for BizTalk Server, and for peripheral services such as SAP and MQSeries. In some cases, we found that adjusting the default settings affords higher throughput. We used the following configurations and techniques, and advise readers to consider them when configuring their own systems:

  • When high throughput results in memory-based throttling for larger messages, we recommend lowering the in-process message count for the send host (available on the throttling properties for the host in the administration console). For this adapter case study, we applied this only to the SMTP 200 KB test run, changing the setting from 1000 to 100.
  • For all tests with the HTTP and SOAP adapters, we increased the number of IIS worker processes for the application pool hosting the receive adapter to three. Each worker process can contain up to 125 active threads; if the number of threads approaches this limit, there may be a bottleneck you need to alleviate. To make this change, open Internet Information Services (IIS) Manager, right-click the application pool hosting the HTTP receive DLL or SOAP Web service, select Properties, and then, on the Performance tab, raise the value of the "Maximum number of worker processes" field in the "Web garden" section as necessary.
  • For all tests that included the HTTP and SOAP adapters, the number of outbound connections allowed at one time occasionally caused a bottleneck (by default, this is limited to 2). To remove this limitation, we added the following XML to the BTSNTSvc.exe.config file in the BizTalk Server product directory, raising the maximum number of connections to 400.

<system.net>
  <defaultProxy>
    <proxy autoDetect="false"/>
  </defaultProxy>
  <connectionManagement>
    <add address="*" maxconnection="400" />
  </connectionManagement>
</system.net>

  • The stored procedure used to select data for the SQL receive adapter must be written in such a way that it avoids database deadlocks, especially when multiple threads read from it at the same time. To accomplish this, we added a uniqueidentifier column named HasBeenRead to the data table to indicate whether a particular row had been read by the SQL receive adapter. Here's how it worked: The stored procedure first generated a new GUID and placed its value in the HasBeenRead column for a batch of rows, using the UPDLOCK and READPAST hints to skip any rows currently locked for update. It then selected all rows marked with this GUID to be received by the adapter. In this way, each row was guaranteed to be read only once, and there was no lock contention when the SQL receive adapter used multiple threads to select data from the table. The pattern we used in the SQL receive test case looks like this:

set nocount on
set xact_abort on
set transaction isolation level read committed

declare @ReadID uniqueidentifier
set @ReadID = NEWID();

-- Mark an unread row with this run's GUID, skipping rows
-- already locked for update by other readers (READPAST)
update tblOrders with (rowlock)
set HasBeenRead = @ReadID
from (select top 1 OrderId
      from tblOrders with (UPDLOCK READPAST)
      where HasBeenRead is null) as t1
where (tblOrders.OrderId = t1.OrderId)

-- Return only the rows marked by this run
select OrderID, TextData
from tblOrders with (nolock)
where HasBeenRead = @ReadID
for xml auto

Test Results

The following table lists the throughput results, in documents processed per second, for each adapter when run with Microsoft BizTalk Server 2004 SP1 versus BizTalk Server 2006 using 2 KB files.

Test Case Name                            BizTalk Server 2004 SP1   BizTalk Server 2006
                                          (documents/sec)           (documents/sec)

Web-Based Adapters
  HTTP to HTTP 1 Way                      97.3                      105
  HTTP to HTTP 2 Way                      35.3                      42.7
  SOAP to SOAP 1 Way                      90.1                      100
  SOAP to SOAP 2 Way                      40.7                      43.2
  SSL HTTP 1 Way                          89.3                      104
  SSL HTTP 2 Way                          32.1                      34.8
  SSL SOAP 1 Way                          90.1                      99.1
  SSL SOAP 2 Way                          41.1                      42.3

Queuing Adapters
  MQSeries to MQSeries Inorder            64.8                      67.5
  MQSeries to MQSeries Non-Inorder        142                       156
  MSMQ to File Nontransactional Local     214                       233
  MSMQ to File Nontransactional Remote    129                       181
  MSMQ to File Transactional Local        145                       211
  File to MSMQ Transactional Remote       205                       215
  File to MSMQ Nontransactional Remote    271                       245
  MSMQT to MSMQT                          70.3                      84.3

Other Adapters
  File to File Local                      271                       271
  File to File UNC                        161                       141
  File to SMTP                            31.9                      142
  File to SQL                             118                       121
  FTP to FTP                              4.70                      4.71
  POP3 to File MIME                       N/A                       72.0
  SQL to File                             109                       83.9
  Windows SharePoint Services No NLB      N/A                       4.13
  SAP to SAP                              12.0                      11.0

The following table lists the number of documents processed per second for each adapter when run with Microsoft BizTalk Server 2004 SP1 versus BizTalk Server 2006 for 200 KB files.

Test Case Name                            BizTalk Server 2004 SP1   BizTalk Server 2006
                                          (documents/sec)           (documents/sec)

Web-Based Adapters
  HTTP to HTTP 1 Way                      11.6                      26.0
  HTTP to HTTP 2 Way                      4.41                      5.87
  SOAP to SOAP 1 Way                      60.0                      50.6
  SOAP to SOAP 2 Way                      24.8                      27.8

Queuing Adapters
  MQSeries to MQSeries Inorder            5.45                      9.31
  MQSeries to MQSeries Non-Inorder        8.23                      16.8
  MSMQ to File Nontransactional Local     45.6                      47.0
  MSMQ to File Nontransactional Remote    16.5                      23.5
  MSMQ to File Transactional Local        32.6                      36.1
  File to MSMQ Transactional Remote       11.8                      17.6
  File to MSMQ Nontransactional Remote    15.9                      19.0
  MSMQT to MSMQT                          28.5                      39.0

Other Adapters
  File to File Local                      43.4                      47.3
  File to File UNC                        12.8                      25.3
  File to SMTP                            14.4                      19.5
  File to SQL                             18.3                      22.3
  FTP to FTP                              4.41                      4.58
  POP3 to File MIME                       N/A                       7.55
  SQL to File                             11.8                      14.6
  Windows SharePoint Services No NLB      N/A                       2.51
  SAP to SAP                              5.67                      6.00

The result data is most easily viewed when split into three categories:

  • Web-based adapters, which include all forms of the HTTP and SOAP adapters, including the 1-way and 2-way variations and those using SSL.
  • Queuing adapters, which include MQSeries, MSMQ, and MSMQT.
  • All other adapters, which include File, FTP, SMTP, SQL, POP3, SAP, and Windows SharePoint Services.

This classification is particularly useful because the SOAP and HTTP adapters both use simple orchestrations in their 2Way test scenarios while the other adapters do not, so it is most meaningful to compare those two adapters directly against each other.

The following three charts (Figures 2 through 4) graphically compare the throughput results achieved for both versions of BizTalk Server using 2 KB files.


Figure 2


Figure 3


Figure 4

From the preceding graphs, note the following:

  • The BizTalk Server 2006 adapters are as fast or faster than the BizTalk Server 2004 adapters in nearly every case. Significantly improved adapters include:
    • SMTP – Approximately 4 times faster than the previous version due to specific improvements and optimizations implemented in BizTalk Server 2006.
    • UNC File Receive/Send – For larger message sizes, specific improvements in large message handling account for most of the nearly 2 times improvement in the 200 KB case. Interestingly, throughput in the 2 KB File to File UNC test case on BizTalk Server 2006 is only 88 percent of that on BizTalk Server 2004 SP1.
    • HTTP – The HTTP adapter was improved across the board, especially in the HTTP to HTTP 1Way 200K test case, where it is almost 2 times faster on BizTalk Server 2006.
    • MQSeries – Improved across the board, but especially for the 200K document size where it is 1.7 to 2 times faster on BizTalk Server 2006 when using a single instance.
    • MSMQ – Shows improvement in nearly all cases. Not surprisingly, nontransactional is always faster than transactional and local is always faster than remote under the same conditions.
  • CPU utilization is not provided in this study; however, in general it was high (80 to 90 percent) on the BizTalk servers. This is a good indication that the adapters were indeed the bottleneck for the system, as intended. Surprisingly, when SSL was added to the SOAP test cases (BizTalk Server 2006 only; see Figure 3), the remaining CPU headroom was sufficient to allow SSL processing without lowering the MST.

The data presented in this study is a good indicator of relative adapter performance for a single adapter instance. As such, it also gives you an idea, with regard to the adapters, of what to expect when upgrading a solution from BizTalk Server 2004 SP1 to BizTalk Server 2006. However, because each adapter scales differently with respect to throughput when additional servers and/or CPUs are added, the MST achievable when each adapter is scaled out will vary significantly.
