What's New in Web Services Enhancements (WSE) 3.0


Mark Fussell
Lead Program Manager, Microsoft Corporation

June 2005

Applies to:
   Web Services Enhancements 3.0 for .NET

Summary: Web Services Enhancements for .NET (WSE) is a product that enables you to build secure Web services quickly and easily. This article describes the driving goals behind the latest WSE 3.0 release, focusing on the major feature highlights and how these can be used to develop distributed applications based upon the Web service specifications. (28 printed pages)


Design Goals for WSE 3.0
Build Secure Web Services Easily
Reestablishing Sessions
Sessions in Web Farms
Simplified Development of Service-Oriented Systems
Future Proofing and Interoperability


Web services have been around for well over 5 years, at least from a .NET Framework perspective. In the world of distributed computing the driving goals of interoperability and cross platform integration are ensuring that the Web service specifications, otherwise known as the WS-* protocols, continue to evolve, driven by the need for secure, reliable and transacted communication. For a comprehensive overview of the Web services architecture read An Introduction to the Web Services Architecture and Its Specifications.

Web Services Enhancements (WSE), often pronounced "Wizzy," is a developer tool that abstracts away much of the specification detail when you incorporate security into your connected-system applications. WSE is primarily focused on bringing message-level security to distributed application development; the article Why WSE? describes the scenarios this enables. Essentially, it is about more flexible development of application architectures. If you look across the WS-* specifications, you see a trend of functionality being elevated from the transport protocol level (security, reliability, session management) to the message protocol level (WS-Security, WS-ReliableMessaging, WS-SecureConversation). When this is combined with agreed specifications for data exchange (XML, XML Schema, SOAP, WSDL) and existing transport protocols (HTTP, TCP, UDP), it enables adaptable and extensible applications that also deliver on the goals of interoperability and integration.

Aaron Skonnard provides a good overview of WSE 2.0 in his article What's New in WSE 2.0, and the majority of that article remains relevant to WSE 3.0. However, you will see in this introductory article that WSE has matured and come of age in version 3.0, bringing greater usability, improved interoperability, and simplification when building secure, service-oriented applications.

Design Goals for WSE 3.0

WSE 3.0 is fundamentally a security product for Web services. When the WSE (1.0 and 2.0) project was first conceived, its primary purpose was to show a practical and usable implementation of the emerging WS-* security specifications, such as WS-Security, WS-Trust, and WS-SecureConversation, and as a result provide feedback into the standardization process. It was not solely constrained to security, however; it also helped drive other specifications such as WS-Addressing (how to get a message from a sender to its final destination) and WS-Attachments (how to send associated attachments with a message). The WS security specifications have now, to a large extent, solidified, and as a result the driving goals for the WSE 3.0 release were not so much to influence emerging specifications (although it does implement more recent specification versions), but rather to recognize that Web services have permeated so many areas of development that WSE needed to augment the existing Web service support in Visual Studio. Solving and simplifying real-world problems encountered by developers was a primary focus.

WSE 3.0 had the following design goals:

Build Secure Web Services Easily: As well as an easy and intuitive API design, the objective here was to abstract common best practices when securing end-to-end messages. From conversations with hundreds of existing WSE customers, five common scenarios for message-level security emerged. These are termed the "turnkey" messaging security scenarios and provide high-level security building blocks, allowing you to concentrate more on the business logic of the Web service in the knowledge that it is secure.

Simplified Development of Service-Oriented Systems Using the Web Service Protocols and .NET Framework v2.0: Continue to provide an easy-to-use programming API abstraction on targeted Web service specifications, introduce recent essential specifications such as Message Transmission Optimization Mechanism (MTOM), take advantage of improvements to the .NET Framework 2.0, and provide an integrated set of tools with Visual Studio 2005.

Future-Proofing and Interoperability: The long-term distributed computing development environment centers on Indigo, and WSE 3.0 puts you on the path to Indigo in two ways. First, WSE 3.0 is guaranteed to be wire-level interoperable over HTTP with Indigo, and second, WSE 3.0 introduces Indigo-like design concepts both around the turnkey security scenarios and building distributed applications on Service Orientation (SO) principles in a fully supported product. WSE 3.0 is also being tested with other vendors' Web services stacks to ensure cross platform interoperability.

Based upon these guiding design goals, the WSE product team set about building the functionality for the WSE 3.0 release.

Note   The samples in this article are taken from the WSE 3.0 QuickStarts, which are located in the \Program Files\Microsoft WSE\v3.0\Samples folder after having installed WSE 3.0.

Build Secure Web Services Easily

This section covers the features that improve the existing WSE 2.0 security features, primarily addressing usability and making common scenarios easier. The strongest emphasis in developing WSE 3.0, apart from interoperability, was simplification of message-level security, thereby improving the usability. This is epitomized by the introduction of the message-level turnkey security scenarios.

Turnkey Security Scenarios

Having watched WSE being used in the wild for well over two years by hundreds of enterprise-level customers building distributed applications, the product team saw patterns emerge from common scenarios for securing messages. These scenarios are named and listed in Table 1 below, with a description and a typical deployment usage. However, they are by no means restricted to just the usage described here, and often depend on other deployment considerations. Typically, you will simply have to pick one of these turnkey security scenarios when building your secure Web service, leaving you to concentrate more on the business logic of the service.

Table 1. The turnkey security scenarios as best practices for message security

Turnkey Security Scenario: UsernameOverTransport
Description: Security protection is performed at the transport level (for example, SSL) and the client is identified via a supplied username and password that is authenticated against a store such as Active Directory, ADAM, or SQL Server.
Typical Usage: Known person to service. Calling from the Internet to the Internet or intranet where the applications have limited security infrastructure. Often SSL is used on the first leg, with another turnkey security scenario, such as Kerberos, used inside the firewall.

Turnkey Security Scenario: UsernameOverCertificate
Description: Security protection is via the server's X.509 certificate and the client is identified via a supplied username and password that is authenticated against a store such as Active Directory, ADAM, or SQL Server.
Typical Usage: Known person to service. From the Internet to the Internet or intranet where the applications are smart clients (for example, Windows Forms applications) and a Public Key Infrastructure (PKI) is maintained. Windows Forms applications and the necessary certificates can be deployed via the ClickOnce technology.

Turnkey Security Scenario: AnonymousOverCertificate
Description: Security protection is via the server's X.509 certificate and the client is unidentified or anonymous; that is, any client with the server's public certificate can communicate securely with the server.
Typical Usage: Unknown person to service. From the Internet to the Internet or intranet where the applications are smart clients (for example, Windows Forms) and a Public Key Infrastructure (PKI) is maintained. Since anyone with the server's public certificate can connect to the service, this is limited to either noncritical services or ones where the server's public key is supplied only to a limited set of companies or individuals.

Turnkey Security Scenario: MutualCertificate
Description: Both parties exchange X.509 certificates that are used to secure the data exchange between them.
Typical Usage: Business to business. Across the Internet or within the intranet, between machines or application servers. Can also be used for limited peer-to-peer scenarios where numbers are not large.

Turnkey Security Scenario: Kerberos (Windows)
Description: The application is within one or more Windows domains and Kerberos provides a configurable security infrastructure. Kerberos tickets are used for authentication and message protection. The other benefits of Kerberos are single sign-on, better performance than PKI with X.509 certificates, and support for delegation, which allows a service to execute on behalf of the calling user.
Typical Usage: Within the intranet, where Microsoft Windows machines and Kerberos Key Distribution Centers (KDCs) can be used for the security infrastructure.

Having seen these five turnkey security scenarios, let's dive deeper into their usage so that you can gain a better appreciation of how they are applied.

Note   Applying security to Web services should typically be a deployment consideration rather than a design-time consideration. In other words, you should be able to write an application that can run inside the intranet with a specific declarative policy file and then re-deploy the same application on the Internet with a different declarative policy file and it will still work.

We are going to explore two of the turnkey security scenarios, UsernameOverCertificate and Kerberos, which are often combined into an end-to-end messaging scenario.

Figure 1 below shows a client application deployed on the Internet that talks to an ASP.NET application server deployed on an intranet behind a firewall. The client application has the server's X.509 certificate installed in order to protect (encrypt) the messages sent to the server. Because X.509 operations are cryptographically expensive, it is more efficient to use an optimization technique: create what is known as an encrypted key (encrypted with the X.509 certificate's public key) and optionally derive another key from it, called a derived key. The derived key is a symmetric key and is cryptographically much cheaper to use than the X.509 certificate. The message to the server is signed and encrypted with this client-generated derived key. So that the server can decrypt the received message, the encrypted key and derived key are included in the message, protected with the server's X.509 certificate. Since the derived key is smaller than the message, it is faster for the server to decrypt just this key and then use it to decrypt the rest of the message, including retrieving the username and password (U/P). The U/P is used to authenticate the user and determine their access rights from a store such as Active Directory, ADAM, or SQL Server. Note that the username token is encrypted in the message, which provides password confidentiality.

Figure 1. Combining the UsernameOverCertificate and Kerberos turnkey security scenarios to secure a message sequence from a client to a service

In the scenario in Figure 1, having authenticated the user from the Internet, another service in the intranet is called from the application server, in this case residing on a separate machine. The security deployed for this message is Kerberos, with the ticket obtained either from the supplied U/P mapped to a domain account or, more typically, from a machine account for the application server. Kerberos is a popular choice within the intranet because of its ease of use. Of course, there may be non-Windows machines on the intranet, and although Kerberos is moving toward being an interoperable security token between vendors (for example, both WSE and Indigo have successfully demonstrated Kerberos interoperability with IBM), for now other forms of security, such as the MutualCertificate scenario, are best used in that case.

Having called the intranet service to perform some business processing, the application server is now ready to respond back to the client with the result. Since using U/P for cryptographic operations is unwise and insecure (see Securing the Username Token with WSE 2.0 for the reasons why), and in Indigo it is not even possible to use U/P for cryptographic operations, how do we protect the message back to the client? Since the client sent over a symmetric, encrypted key to the server, this same key, which is known to the client, can be used to protect the response. So not only does the encrypted key improve the performance of security operations when using X.509 certificates, but it also enables secure responses back to the client without using U/P or requiring the server to have a client X.509 certificate installed. When the client receives the response from the application server, it is able to decrypt it with its stored encrypted key, thus ensuring that the whole end-to-end message sequence is secured effectively.

The important point to remember is that although I have described the detail of what occurs in the generation of derived keys to secure the client-to-application server call, the implementation of this is all taken care of for you by the specific turnkey security scenario. You do not have to know or care about the fact that there is a cryptographic optimization occurring, you just need to know which turnkey security scenario is the best choice for you in your deployment environment. This same principle applies to the Indigo security binding element, so that going forward design decisions made with WSE 3.0 are applicable to Indigo.

Having described how the turnkey security scenarios are used, let's now see how they are implemented. In WSE 3.0 this is through policy files.

Policy and the Turnkey Security Scenarios

Before we dive into the new policy files in WSE 3.0, let's reflect on WSE 2.0 today. In WSE 2.0 there was no correlation between the code written to secure a message exchange and the declarative policy files. You had to mentally translate policy into EncryptedData and MessageSignature classes. Although these classes are still applicable and needed in WSE 3.0, there is now a simpler, more usable programming model for securing messages: applying the policy to the client or service in code. In WSE 3.0, through the use of a Policy attribute or alternatively the SetPolicy method on a WSE-generated client proxy (via Visual Studio's Add Web Reference), policy can be used in code to secure a client or a service. In effect, the imperative and declarative programming models for policy have been aligned to provide uniform programming abstractions.

The code below indicates how to set the policy named "ServerPolicy" onto a .NET Web service called WSSecurityUsernameService (see the WSSecurityUsername sample located in the \Security sub-folder). This Web service contains a single [WebMethod] called StockQuoteRequest, which we will use throughout the rest of this article. This Web method accepts an array of stock symbols and returns the corresponding quotes.

[Policy("ServerPolicy")]
[WebService(Namespace = "http://stockservice.contoso.com/wse/samples/2005/10")]
public class WSSecurityUsernameService : System.Web.Services.WebService
{
    [WebMethod]
    public List<StockQuote> StockQuoteRequest([XmlArray(),
        XmlArrayItem("Symbol")] string[] symbols)
    {
        // Business logic here
    }
}

Before we look at the ServerPolicy, let's see how the policy was generated. From Visual Studio 2005, create a new Web Site project and select the ASP.NET Web Service template, as shown in Figure 2 below.

Figure 2. New ASP.NET Web service in Visual Studio 2005

Once you have created a new Web service, right-click the project in Solution Explorer in Visual Studio and select the WSE Configuration Tool menu option at the bottom of the context menu. You will now see the WSE 3.0 Configuration tool. Select both the "Enable this project for Web Services Enhancements" and "Enable Microsoft Web Services Enhancements Soap Protocol Factory" check boxes, which ensures this project uses WSE when processing SOAP messages. Next select the Policy tab, check the "Enable Policy" check box, press the Add button, and type a name for the new policy that is going to be created. We are now ready to generate a new policy through the WSE Security Settings Wizard. After you step past the first page you will see the Authentication page shown in Figure 3 below.
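For reference, enabling those two check boxes simply writes configuration entries into the project's web.config. A sketch of what the tool produces is shown below; the exact type names and assembly attributes here are assumptions based on the WSE 3.0 defaults, so treat this as illustrative rather than authoritative.

```xml
<!-- Added by "Enable this project for Web Services Enhancements" -->
<configSections>
  <section name="microsoft.web.services3"
      type="Microsoft.Web.Services3.Configuration.WebServicesConfiguration,
      Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral,
      PublicKeyToken=31bf3856ad364e35" />
</configSections>

<!-- Added by "Enable Microsoft Web Services Enhancements Soap Protocol Factory",
     so that WSE processes incoming SOAP messages -->
<system.web>
  <webServices>
    <soapServerProtocolFactory type="Microsoft.Web.Services3.WseProtocolFactory,
        Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral,
        PublicKeyToken=31bf3856ad364e35" />
  </webServices>
</system.web>
```

You rarely need to touch these entries by hand; the configuration tool maintains them for you.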

Figure 3. The WSE Security Settings Wizard Authentication Settings page

You will see that you are first asked whether to secure a client or a server application. The Authentication modes are the start of determining which turnkey security scenario best applies to your deployment. Experiment with a variety of different settings and you will see that the wizard steps you through the appropriate questions accordingly. For example, in order to end up with the UsernameOverCertificate scenario, choose the Username Authentication mode and step through the wizard. The end result is the generation of a policy describing your security requirements, as shown in Figure 4 below.

Figure 4. Policy created for the UsernameOverCertificate turnkey security scenario

Pressing the Finish button on this screen generates a Policy file, by default saved with the name wse3policyCache.config. The policy shown below is the one generated from the Security Settings wizard.

<policies xmlns="http://schemas.microsoft.com/wse/2005/06/policy">
  <extensions>
    <extension name="usernameOverCertificateSecurity" 
type="Microsoft.Web.Services3.Design.UsernameOverCertificateAssertion, 
Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral, 
PublicKeyToken=31bf3856ad364e35" />
    <extension name="x509" type="Microsoft.Web.Services3.Design.X509TokenProvider, 
Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral,
 PublicKeyToken=31bf3856ad364e35" />
  </extensions>
  <policy name="ServerPolicy">
    <usernameOverCertificateSecurity establishSecurityContext="true" 
renewExpiredSecurityContext="true" signatureConfirmation="false" 
protectionOrder="SignBeforeEncrypting" deriveKeys="true" actor="">
      <serviceToken>
        <x509 storeLocation="LocalMachine" storeName="My"
            findType="FindBySubjectDistinguishedName" />
      </serviceToken>
      <protection>
        <request signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="true" />
        <response signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="true" />
        <fault signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="false" />
      </protection>
    </usernameOverCertificateSecurity>
  </policy>
</policies>
If you are familiar with WSE 2.0 policy files, you will see that this WSE 3.0 policy file is considerably simplified. It consists of a collection of named policies, each declared with a <policy> element, for example <policy name="ServerPolicy">. Within the <policy> element is the type of turnkey security scenario selected, in this case the <usernameOverCertificateSecurity> element, which has a number of configurable attributes, such as whether to establish a secure session, the message protection order, and so on. Depending on the chosen security settings, the <usernameOverCertificateSecurity> element will contain either a <serviceToken> or a <clientToken>, which describes where to get the security credentials from. In the policy example above, the <serviceToken> contains an <x509> element indicating how the requests arriving at the server are protected. The <x509> element has attributes describing where to get this token information from, including the name of the store, the store location, and how the X.509 certificate should be uniquely identified.

The <protection> element indicates what parts of the message are signed and encrypted for the request, response, and fault messages. For example, all request messages have by default the WS-Addressing header, the security header timestamp, and the message body signed. Only the message body is encrypted. (Encrypting WS-Addressing headers makes it very hard to route messages to the ultimate destination.)

I suggest that you experiment with the other turnkey security scenarios generated via the security settings wizard and then see what is contained in the corresponding generated policy file. The vast majority of them will follow the pattern above.

Policy framework

There will be a future article diving deeply into the Policy Framework of WSE 3.0, but it is worth providing a brief overview here so that at least you know how the turnkey security scenarios fit in. Policy describes an input and an output processing pipeline using a number of assertions that can be added or removed. These assertions can create filters that describe transformations of the XML message as it flows from the wire as a SOAP message to the application and back out. This is illustrated in Figure 5 below.

Figure 5. The Policy Framework in WSE 3.0 showing a pipeline of assertions

The security assertions (represented by the orange boxes) occur at the start of the input pipeline and at the end of the output pipeline. Tracing assertions simply write messages to the log files and do no actual transformations of the message. You are able to insert custom assertions anywhere and in any specific order on the pipeline. For example, a custom assertion may validate incoming messages against a known set of XML schemas.

Using the turnkey security scenarios

Having seen the turnkey security scenarios and how policy implements them, now it is time to see them in use. The QuickStart samples that ship with WSE 3.0 contain a set of security samples, including one for each of the turnkey security scenarios. We will delve into the WSSecurityUsernamePolicy sample solution, which shows how to apply the usernameOverCertificate turnkey security scenario.

We have seen the server policy file generated through the Security Settings Wizard and how it is applied to a Web service via the [Policy] attribute. Open the client console application in the same solution. In this client project a WSE-enabled proxy has been generated through the Add Web Reference context menu. A client policy to secure the messages to the server can be set in one of two ways. Either you can call the SetPolicy method on the proxy, for example:

serviceProxy.SetPolicy("ClientPolicy");
Or alternatively you can use the CLR [Policy] attribute on the client in an identical way as used on the server. However, the client proxy files are auto-generated, so any attribute added to the client would be lost if the client proxy were updated via Add Web Reference. Fortunately, the client-generated code uses CLR partial classes introduced in .NET Framework 2.0, meaning that we can apply the [Policy] attribute in another file and let the compiler take care of composing the full class definition. If you look in the WSSecurityUsernameClient.cs file you will see the following commented-out code that shows how to apply the [Policy] attribute on the client using partial classes.

namespace localhost
{
  [Policy("ClientPolicy")]
  public partial class WSSecurityUsernameServiceWse : 
     Microsoft.Web.Services3.WebServicesClientProtocol {}
}

Let's now look at the rest of the client code in the Run method in this same file.

public void Run()
{
    // Create an instance of the Web service proxy
    WSSecurityUsernameServiceWse serviceProxy = new
        WSSecurityUsernameServiceWse();

    // Configure the proxy
    ConfigureProxy( serviceProxy );

    UsernameToken token = null;
    bool useCorrectPassword = true; 
    string username = Environment.UserName;
    byte[] passwordBytes = System.Text.Encoding.UTF8.GetBytes(username);

    if (useCorrectPassword)
    {
        string passwordEquivalent = Convert.ToBase64String(passwordBytes);
        token = new UsernameToken(username, passwordEquivalent);
    }
    else
    {
        token = new UsernameToken(username, "BadPassword");
    }

    // U/P are set through code. X509 is set through policy.
    serviceProxy.SetClientCredential(token);

    // Set the ClientPolicy onto the proxy
    serviceProxy.SetPolicy("ClientPolicy");

    // Call the service
    Console.WriteLine("Calling {0}", serviceProxy.Url);
    String[] symbols = {"FABRIKAM", "CONTOSO"};
    StockQuote[] quotes = serviceProxy.StockQuoteRequest(symbols);

    // Success!
    Console.WriteLine("Web Service called successfully. Simple view:");
    foreach( StockQuote quote in quotes )
    {
        Console.WriteLine( "Symbol: " + quote.Symbol );
        Console.WriteLine( "\tName:\t\t\t" + quote.Name );
        Console.WriteLine( "\tLast Price:\t\t" + quote.Last );
        Console.WriteLine( "\tPrevious Change:\t" + quote.PreviousChange + "%" );
    }
}
In the code above you will see that the username and password are set in code via the UsernameToken class, which is preferred over specifying these values in the policy file. This U/P is then set via the SetClientCredential method as the client credential to authenticate the client to the server. The policy is set in this case using the SetPolicy method, which reads the policy called ClientPolicy from the file; finally, the StockQuoteRequest method is called with the supplied stock symbols. If you look at the client's policy, also saved in a file called wse3policyCache.config, you will see that it is virtually identical to the server's policy, other than the storeLocation from which the server's X.509 certificate is retrieved.
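As a sketch, the client's ClientPolicy might look like the following; the attribute values are assumed to mirror the server policy shown earlier (per the QuickStart defaults), with only the certificate store location changed.

```xml
<policy name="ClientPolicy">
  <usernameOverCertificateSecurity establishSecurityContext="true"
      renewExpiredSecurityContext="true" signatureConfirmation="false"
      protectionOrder="SignBeforeEncrypting" deriveKeys="true" actor="">
    <serviceToken>
      <!-- The client looks up the *server's* certificate, here assumed to be
           installed in the current user's personal store -->
      <x509 storeLocation="CurrentUser" storeName="My"
          findType="FindBySubjectDistinguishedName" />
    </serviceToken>
    <protection>
      <request signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="true" />
      <response signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="true" />
      <fault signatureOptions="IncludeAddressing, IncludeTimestamp, IncludeSoapBody" encryptBody="false" />
    </protection>
  </usernameOverCertificateSecurity>
</policy>
```

Keeping the request and response protection settings identical on both sides is what allows the two policies to interoperate.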

If you turn on WSE Tracing via the Diagnostics tab on the WSE Configuration Tool, run this sample, and then examine the trace logs, you will see the messages sent between the client and the server secured using the usernameOverCertificateSecurity turnkey security scenario. All that was required was to generate client and server policy files and apply these to the client and the server using the SetPolicy method and the [Policy] attribute, respectively.
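Behind the scenes, the Diagnostics tab just adds a tracing element to the application's configuration file. A sketch of the resulting entry is shown below; the file names are assumed to be the tool's defaults.

```xml
<microsoft.web.services3>
  <diagnostics>
    <!-- Incoming and outgoing SOAP messages are written to these log files -->
    <trace enabled="true" input="InputTrace.webinfo" output="OutputTrace.webinfo" />
  </diagnostics>
</microsoft.web.services3>
```

Remember to turn tracing off in production, since the logs contain full copies of every message.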

Note   WSE 3.0 no longer implements WS-SecurityPolicy, as this specification is in the standardization process and has changed significantly since the WSE 2.0 release, which followed an early draft. Policy in WSE 3.0 should be regarded as a configuration binding for your service.
WSE 3.0 does not implement WS-MEX for metadata exchange, which enables you to determine the policy on a service. Policy files need to be exchanged out-of-band.

We have seen how policy in WSE 3.0 is now aligned between the declarative files and the imperative code, providing a clearer and more consistent development approach to securing a client and a service.

Sending Large Amounts of Data with MTOM

MTOM, otherwise known as Message Transmission Optimization Mechanism, enables you to send binary data efficiently as part of a SOAP message. The key word here is optimization, since to all intents and purposes this is transparent to the developer and simply happens when enabled. MTOM is a W3C recommendation that replaces DIME and WS-Attachments as the mechanism for sending large amounts of data such as document files and images.

There are three key benefits to using MTOM over the existing technologies.

  1. Security. The primary benefit is that MTOM composes with security (via WS-Security), meaning that the data is secure as well as the SOAP message. With DIME the attachments are not secure (unless you used transport-level security) and by simply using any TCP trace utility or a network protocol analyzer you could see the attached data in plain text.
  2. Reduced Wire Size. With MTOM, binary values are sent on the wire as a MIME attachment and are referenced from the body of the SOAP message. Typically, due to the specific character ranges allowed in XML 1.0 (for example, most characters < 0x20 cannot be included anywhere in an XML document), binary data is converted to ASCII characters via the base64 encoding algorithm (see http://en.wikipedia.org/wiki/Base64). The resultant base64-encoded data has a length that is approximately 33% greater than the original binary data. MTOM does not suffer from this size expansion, and so the wire size is smaller. Wire size is only an issue where bandwidth is a significant enough restriction that it affects the messages.
    Note   Although MTOM reduces wire size when composed with security, it does not reduce the processing time either on the client or the server in order to secure the message. This is because WS-Security requires the data to be converted to base64 to apply normalization and canonicalization algorithms and hence generate the cipher values in order to achieve interoperability.
  3. Simplified Programming Model. With MTOM you do not have to use a separate attachments collection as with DIME in order to send the data. All you do is write the service and then simply indicate that this service supports MTOM encoding in the application's configuration file. Any byte[] types returned from the service are then automatically MTOM-encoded.
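The base64 size expansion mentioned in point 2 is easy to see directly: every 3 bytes of input become 4 output characters. A small sketch:

```csharp
using System;

class Base64SizeDemo
{
    static void Main()
    {
        // 3,000 bytes of arbitrary binary data...
        byte[] binaryData = new byte[3000];
        new Random(42).NextBytes(binaryData);

        string encoded = Convert.ToBase64String(binaryData);

        // ...become exactly 4,000 base64 characters: each 3-byte group maps
        // to 4 characters, a fixed overhead of roughly 33 percent.
        Console.WriteLine("Binary bytes: {0}", binaryData.Length);   // 3000
        Console.WriteLine("Base64 chars: {0}", encoded.Length);      // 4000
    }
}
```

With MTOM the 3,000 bytes travel as raw bytes in a MIME part, so this overhead disappears from the wire.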

Having discussed the benefits of MTOM, let's take a look at a simple Web service that returns binary files from a server to a client. The code below shows the service, which returns a file as a GetFileResponse type for a given file name. This code is taken from the BinaryDataMTOM QuickStart sample shipped with the WSE 3.0 samples.

[WebService(Namespace = "http://stockservice.contoso.com/wse/samples/2005/10")]
public class BinaryDataMTOMService : System.Web.Services.WebService
{
    // This WebMethod returns MTOM-encoded binary data 
    [WebMethod]
    public GetFileResponse GetFile(string fileName)
    {
        GetFileResponse response = new GetFileResponse();
        response.FileName = fileName;
        String filePath = AppDomain.CurrentDomain.BaseDirectory +
            @"App_Data\" + fileName;
        response.FileContents = File.ReadAllBytes(filePath);
        return response;
    }
}

// Web Method return type
[XmlType(Namespace = "http://stockservice.contoso.com/wse/samples/2005/10")]
public class GetFileResponse
{
    [XmlElement("fileName", IsNullable = false)]
    public string FileName;

    [XmlElement("fileData", IsNullable = false)]
    public byte[] FileContents;
}

The GetFileResponse type is where the MTOM magic is applied, since the FileContents is returned as a byte[], populated by simply reading the file with the File.ReadAllBytes method. MTOM is turned on in configuration, either on the client or the server, via the Messaging tab in the WSE Configuration Tool. Figure 6 shows these MTOM configuration options which, when set, are written to either the client's app.config or the server's web.config file.

Figure 6. MTOM configuration options

There are three server MTOM modes: "optional", "always", and "never".

   "Always" means that the service always requires MTOM messages from the client and will always return response messages using MTOM.
   "Never" means that MTOM will never be used, and the service will reject MTOM requests.
   "Optional" (the default) means the service will respond in kind to the type of message sent by the client. So if the client sends an MTOM request, it will respond with an MTOM response.

On the client, MTOM is either "On" or "Off" (the default).
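These settings end up as an <mtom> element in the messaging section of the WSE configuration. A sketch of the two files follows; the surrounding section layout is assumed from the WSE 3.0 defaults, and only the clientMode="On" form is shown verbatim in the QuickStart.

```xml
<!-- Server (web.config): respond in kind to MTOM requests -->
<microsoft.web.services3>
  <messaging>
    <mtom serverMode="optional" />
  </messaging>
</microsoft.web.services3>

<!-- Client (app.config): always send and expect MTOM -->
<microsoft.web.services3>
  <messaging>
    <mtom clientMode="On" />
  </messaging>
</microsoft.web.services3>
```

Because both sides read this from configuration, switching MTOM on or off is a deployment decision, not a code change.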

Let's now look at the client-side code to call this MTOM-enabled service, which returns the GetFileResponse type and then writes the file to disk. Notice that this is no different from calling an ASP.NET Web service in order to return a file; it just happens to optimize the transmission using MTOM.

//Get winter.jpg file as a binary file 
//MTOM is enabled by default for the client in app.config via <mtom clientMode="On" /> element
String fileName = "Winter.jpg";
BinaryDataMTOMServiceWse serviceproxy = new BinaryDataMTOMServiceWse();
localhost.GetFileResponse response = serviceproxy.GetFile(fileName);

Console.WriteLine("File Name: {0}", response.fileName);
Console.WriteLine("Unsecured Bytes Received (at Client): {0}", response.fileData.Length);

File.WriteAllBytes(response.fileName, response.fileData);

You can choose to override the MTOM configuration setting for each client proxy by setting the RequireMtom property on the WSE-generated proxy. For example, the following code turns off MTOM for the proxy:

serviceproxy.RequireMtom = false;

So, given all this magic that happens under the covers, how do you know that binary data is being sent on the wire? In order to see this we need to use a TCP trace tool such as TcpTrace.exe, which can be downloaded at http://www.pocketsoap.com/tcptrace/.

If you run this utility (remembering to add port 8080 to the URL either in the client's app.config file or in the client proxy-generated code in the reference.cs file to ensure that the request message goes via the TcpTrace.exe utility), you will then see a message similar to the one shown in Figure 7 below.


Figure 7. Using MTOM to return binary data from a Web service (click image to enlarge)

The upper portion of the screen shot is the request from the client. The lower portion of the screen shot shows an MTOM-encoded response from the server where the important information is in the <fileData> element, which contains an xop:Include reference to the corresponding MIME part.

    <xop:Include href="cid:1.632536037467392@example.org" />

The MIME part with this Content-ID (cid:1.632536037467392@example.org) in the lower half of the screen shot contains the binary data for the winter.jpg file retrieved from the server.

Streaming binary data with MTOM

The previous MTOM code example buffers the file on the server by first loading it into the FileContents byte[]. For large files with many requests this would take up a large amount of memory on the server. What you want instead is to stream the file over the network from the server to the client. For ASP.NET Web services hosted in IIS using HTTP this is possible by implementing a return type that derives from IXmlSerializable. If you take another look at the code in the BinaryDataMTOMService.cs file in the BinaryDataMTOM QuickStart sample, you will find another Web service that instead returns a type called GetFileResponseWrapper, which implements IXmlSerializable. This Web service performs the same function of returning a named file, but this time, by implementing the ReadXml and WriteXml methods of the IXmlSerializable interface, the file is streamed from the server to the network.
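To make the streaming approach concrete, the sketch below shows the shape of such an IXmlSerializable return type. The type and element names echo the QuickStart sample, but the body here is illustrative rather than the shipped code; the key idea is that WriteXml reads the file in small chunks and writes each chunk to the XmlWriter, so the whole file is never held in server memory.

```csharp
using System.IO;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// Illustrative streaming response type (not the shipped QuickStart code)
public class GetFileResponseWrapper : IXmlSerializable
{
    private string filePath;

    public GetFileResponseWrapper() { }

    public GetFileResponseWrapper(string filePath)
    {
        this.filePath = filePath;
    }

    public XmlSchema GetSchema() { return null; }

    public void ReadXml(XmlReader reader)
    {
        // On the client this would read the base64 content back out;
        // omitted for brevity
    }

    public void WriteXml(XmlWriter writer)
    {
        // Stream the file to the network in 4 KB chunks rather than
        // buffering the whole file in a byte[]
        writer.WriteStartElement("fileData");
        using (FileStream fs = File.OpenRead(filePath))
        {
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                writer.WriteBase64(buffer, 0, bytesRead);
            }
        }
        writer.WriteEndElement();
    }
}
```

With MTOM enabled, the base64 content written here is transmitted as a raw binary MIME part, so the chunked writes translate directly into streamed binary on the wire.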

Improved Session Management

WSE 2.0 supports secure conversation (WS-SecureConversation), which enables a session to be established between a client and a service, thereby achieving efficient communication for multiple messages. Sessions and secure conversations are synonymous. The article WS-Security Drilldown in Web Services Enhancements 2.0 provides an overview of secure conversation that is still applicable to WSE 3.0. However, in WSE 3.0 sessions can now be established via the policy for the service, such that any given service can also act as a Security Context Token (SCT) issuer, otherwise known as a Security Token Service (STS). If you look back at the example policy file shown earlier, you will see that for any given turnkey security scenario you can indicate that the service can automatically issue SCTs by setting the establishSecurityContext attribute to true as shown in the policy snippet below.

<usernameOverCertificateSecurity establishSecurityContext="true"
  renewExpiredSecurityContext="true" deriveKeys="true" actor="">

In WSE 2.0 auto-SCT issuance was done in the service's web.config file, whilst in WSE 3.0 it has been made more explicit by indicating that this is simply a property of the service itself, when associated with a specific named policy. In addition, the protection used to bootstrap the issuance of the SCT is via the named turnkey security scenario. In the case above, the usernameOverCertificateSecurity assertion is used to protect the RST/RSTR message exchange in order to issue the SCT to the client and to establish the session.

In WSE 3.0 a number of additional features have been added to further improve the management of sessions. First, a session can be renewed via the renewExpiredSecurityContext attribute, as shown above. As with WSE 2.0, if the SCT times out, setting this attribute to true automatically reestablishes a new session with a new SCT. The renewal does not keep the original SCT identifier, but generates a brand new SCT with a new identifier value. The other additional features are session cancellation and stateful sessions, which we will discuss in more detail next.

Session cancellation

In addition to having a timeout, SCTs can now be cancelled explicitly by obtaining the SCT from the client's proxy and calling its Cancel method. This means that the service knows that the session has finished and can clean up its SCT cache. The code below shows how to get the SCT instance from the client's proxy, stored in its SessionState object.

String[] symbols = { "FABRIKAM", "CONTOSO" };
StockQuote[] quotes = serviceProxy.StockQuoteRequest(symbols);

// Success!
Console.WriteLine("Web Service called successfully. Simple view:");
foreach (StockQuote quote in quotes)
{
    Console.WriteLine("Symbol: " + quote.Symbol);
    Console.WriteLine("\tName:\t\t\t" + quote.Name);
    Console.WriteLine("\tLast Price:\t\t" + quote.Last);
    Console.WriteLine("\tPrevious Change:\t" + quote.PreviousChange + "%");
}

// Get the correlation state for the conversation from the
// client proxy's SessionState object
SecureConversationCorrelationState correlationState =
    serviceProxy.SessionState.Get<SecureConversationCorrelationState>();

if (correlationState != null)
{
    // Get the SCT for the current conversation
    SecurityContextToken sct =
        correlationState.Token as SecurityContextToken;

    // Cancel the conversation
    if (sct != null)
    {
        sct.Cancel();
    }
}

The SCT is cancelled after the call to the service succeeds and this "tears down" the session for both the client and the service.

Stateful sessions

WSE 3.0 now has the ability to create stateful sessions from the client's perspective, otherwise referred to as stateful SCTs. In WSE 2.0 an SCT, represented in the message by the <SecurityContextToken> element, could only contain an <Identifier> element whose value pointed to the SCT cached both on the server and the client. In other words, an identifier value was passed back and forth between the client and the server and used to look up the SCT in a hash table at either end. Messages are secured with the SCT as both ends map the identifier value to the cached SCT.

However, there are two scenarios where you need to have more than just the identifier value in the <SecurityContextToken> element:

  1. Reestablishing a session after a service has failed.
  2. Using sessions in Web farms, which are effectively a logical collection of machines.

We will look at each of these scenarios in turn.

Reestablishing Sessions

Services can fail for a number of reasons, such as load or stress conditions in their hosting environment or the recycling of application domains due to other resource constraints. If a service goes down and the cached (in-memory) SCT is lost, then the session is effectively lost. However, by holding the SCT state on the client and sending it with each request, the session can be reestablished with the server. The SCT state in the message is represented by a <Cookie> element within the <SecurityContextToken> element, in addition to the <Identifier> element. For performance, the <Identifier> element value is still checked first against the SCT cache. The <Cookie> element contains the session key, the lifetime, and the client's identification token (e.g., the Username/Password token). The cookie is encrypted with the server's token so that only the target service is able to decrypt the cookie in the SCT to retrieve this information. The XML message snippet below shows the encrypted cookie's <EncodedData> element within the <SecurityContextToken> element when stateful SCTs are sent in application requests from the client.

    <wssc:Identifier>uuid:9ee5a3c1-97c0-4f42-8ac1-8d819e80cb27</wssc:Identifier>
    <p:Cookie xmlns:p="http://schemas.microsoft.com/wse/2005/03/StatefulSCT">
      <p:EncodedData>AQAAANCMnd ...(abbreviated)... </p:EncodedData>
    </p:Cookie>

By default the Windows Data Protection API (DPAPI) is used to encrypt and decrypt the data in the cookie of the stateful SCT, as it produces a compact binary format that has little impact on the size of the message. See Windows Data Protection for a description of DPAPI. DPAPI uses the key associated with the logged-on account of the current thread. Therefore, if the SCT issuer (STS) and the target service are running under the same account (which is the recommended approach), then the server's token (e.g., the server's X.509 certificate or Kerberos token) is not needed to encrypt the SCT, since that account's key is used instead. Although it is not uncommon to have third-party STSs, it is usually preferable to have the service also act as an STS.
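The DPAPI behavior described above can be seen in isolation with the .NET ProtectedData class (in the System.Security assembly). This is a general illustration of DPAPI round-tripping, not WSE's internal cookie-protection code.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class DpapiSketch
{
    static void Main()
    {
        // Stand-in for the session state that WSE places in the SCT cookie
        byte[] sessionState =
            Encoding.UTF8.GetBytes("session key, lifetime, client token");

        // Protect with the key of the current (logged-on) account; only code
        // running under the same account can unprotect it
        byte[] cookie = ProtectedData.Protect(
            sessionState, null, DataProtectionScope.CurrentUser);

        byte[] recovered = ProtectedData.Unprotect(
            cookie, null, DataProtectionScope.CurrentUser);
        Console.WriteLine(Encoding.UTF8.GetString(recovered));
    }
}
```

This is why running the STS and the target service under the same account works without configuring a server token: both ends share the same DPAPI key.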

To use stateful SCTs where the called service is also the security token service (STS), it is simply necessary to set the enabled attribute to true on the <statefulSecurityContextToken> element in the service's web.config, as shown in the snippet below.

      <statefulSecurityContextToken enabled="true" />
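For context, this element lives inside the <microsoft.web.services3> section of web.config. The sketch below assumes the parent element is <tokenIssuer>, mirroring the Configuration Tool's TokenIssuing tab; verify the exact nesting against a web.config generated by the tool.

```xml
<configuration>
  <microsoft.web.services3>
    <tokenIssuer>
      <!-- Send the session state to the client in an encrypted cookie -->
      <statefulSecurityContextToken enabled="true" />
    </tokenIssuer>
  </microsoft.web.services3>
</configuration>
```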

This value can also be set via the WSE Configuration Tool, by selecting the TokenIssuing tab and then selecting the "Enable Stateful Security Context Token" check box as shown in Figure 8 below.

Figure 8. Setting stateful SCTs on a service

The simplest way to experience the power and usefulness of the stateful SCT feature for reestablishing a session is to alter the SecureConversationPolicy QuickStart example so that the client proxy calls the Web service twice. Ensure that stateful SCTs are enabled on the service, and then, between requests to the service, run iisreset.exe from a command prompt to simulate the service failing. Ensure that you wait for IIS to restart. With stateful SCTs enabled, the second call to the service succeeds as if nothing had happened; that is, the session is reestablished. With stateful SCTs disabled, the second call fails, since only an identifier is sent in the message.

Note   Session cancellation and stateful SCTs do not really compose. If stateful SCTs are enabled then, although it is possible to cancel a session, the next client request will simply reestablish it, just as if the service had gone down and then come back up. Be aware of how you are trying to manage your session.

Sessions in Web Farms

The second scenario for stateful SCTs is using sessions in Web farms. The excellent article Managing Security Context Tokens in a Web Farm describes three solutions to managing state across a Web farm. Stateful SCTs in WSE 3.0 are similar to the solution referred to there as "Put the state information for the SCT into the extensibility area of the SCT", and the text describing this is directly applicable to using stateful SCTs in WSE 3.0. I suggest reading this article to gain an appreciation of how to manage security context tokens across a farm hosting multiple instances of a service. Also, as discussed earlier, the use of DPAPI to protect the SCT means that you do not have to configure the target service in the Web farm with the server token, which makes manageability easier.

Simplified Development of Service-Oriented Systems

This section covers the features that enable service-oriented systems to be built more easily using WSE 3.0 and Visual Studio 2005.

Hosting ASMX Web Services Outside of IIS

WSE 2.0 introduced the SoapClient and SoapService classes in order to be able to host Web services outside of IIS and call them with protocols other than HTTP using SOAP messages. SoapClient and SoapService along with the lower-level SoapSender and SoapReceiver classes established WSE as an alternative messaging platform where the fundamental tenets were transport-neutral messaging and alternative hosts. For more details on the WSE 2.0 messaging APIs, read Web Service Messaging with Web Services Enhancements 2.0.

In order to bring the messaging love to ASMX Web services, WSE 3.0 is able to host the ASMX Web services runtime. Changes in the .NET Framework 2.0 made it possible to integrate WSE with the ASMX programming model, achieving transport and host independence while continuing to take advantage of the tools support in Visual Studio 2005.

In WSE 3.0, ASMX Web services can be hosted in console applications, Windows services, COM+ components, or Windows Forms applications and called via the TCP protocol. Several custom transports have also been published for WSE, including UDP and MSMQ, which can likewise be used with these alternative hosts for ASMX Web services rather than being limited to TCP.

The code sample below shows how this is achieved in a console application. Taking the StockService class derived from System.Web.Services.WebService, it is simply a matter of providing this type to the SoapReceivers collection for a given TCP listening address, specified via an EndpointReference instance.

class ServiceHost
{
    // This Web service is hosted in a console application
    static void Main(string[] args)
    {
        ServiceHost host = null;
        try
        {
            host = new ServiceHost();
            host.Run();
            Console.WriteLine("Press any key to exit when done...");
            Console.ReadLine();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    public void Run()
    {
        // The address to start listening for requests
        Uri address = new Uri("soap.tcp://localhost/tcpstockservice");

        // Add the System.Type of the service (StockService) to the
        // SoapReceivers collection.
        SoapReceivers.Add(new EndpointReference(address), typeof(StockService));
        Console.WriteLine("Listening for messages at address " + address);
    }
}

From the client, a proxy class generated through Visual Studio's Add Web Reference (if this Web service is also hosted by IIS) can be used to communicate with the console-hosted ASMX Web service simply by updating the proxy's Url property.

In a future release of WSE 3.0 you will be able to use the wsewsdl.exe tool to generate a client proxy derived from Microsoft.Web.Services3.WebServicesClientProtocol, which can be used to communicate with the ASMX Web service via TCP. Wsewsdl.exe will continue to be able to generate client proxies derived from SoapClient, as can be done with WSE 2.0 today.

The code snippet below shows how the client proxy can be used to call this console-hosted ASMX Web service via TCP.

public void RunTcpClient(StockServiceWse serviceProxy)
{
    Console.WriteLine("Call ASP.NET Web Service via TCP");
    serviceProxy.Url = "soap.tcp://localhost/tcpstockservice";

    // Set the action in the message to indicate which WebMethod to call
    serviceProxy.RequestSoapContext.Addressing.Action =
        new Action("StockQuoteRequest");

    // Call the service
    Console.WriteLine("Calling {0}", serviceProxy.Url);
    String[] symbols = { "FABRIKAM", "CONTOSO" };
    StockQuote[] quotes = serviceProxy.StockQuoteRequest(symbols);

    // Success!
    Console.WriteLine("Web Service called successfully. Simple view:");
    foreach (StockQuote quote in quotes)
    {
        Console.WriteLine("Symbol: " + quote.Symbol);
        Console.WriteLine("\tName:\t\t\t" + quote.Name);
        Console.WriteLine("\tLast Price:\t\t" + quote.Last);
        Console.WriteLine("\tPrevious Change:\t" + quote.PreviousChange + "%");
    }
}

Integration with Visual Studio 2005

WSE 3.0 is aligned with the release of Visual Studio 2005 and the .NET Framework 2.0, and to this extent takes advantage of changes to the underlying platform. WSE 3.0 does not run on the .NET Framework 1.1 or Visual Studio 2003. Consequently, the WSE 3.0 documentation describes the migration steps required to move WSE 2.0 projects forward to WSE 3.0, including configuration tool support to automatically upgrade the WSE configuration files to the newer version. On the whole, upgrading from WSE 2.0 to WSE 3.0 is a mechanical process with only a limited number of breaking changes.

WSE 3.0 supports 64-bit platforms, in line with the requirements of the .NET Framework 2.0.

Future Proofing and Interoperability

A main objective of the WSE 3.0 release is to provide a path to Indigo in order to build service-oriented applications based upon the Web services protocols. WSE 3.0 is aligned with the Visual Studio 2005 and .NET Framework 2.0 wave and provides security support to ASMX Web services. Indigo is due to be released in the Windows Longhorn timeframe and is the future platform to build distributed, service-oriented applications.

WSE 3.0, when released later this year, will be a fully supported product. This support statement is significant. When WSE 1.0 was released it had a limited support lifecycle. Since the WSE 2.0 release, WSE has had extended customer support in line with the .NET Framework. For more information on the WSE product lifecycle see Product Lifecycle Dates - Developer Tools Family.

WSE 3.0 offers the guarantee of wire-level compatibility with Indigo when using the turnkey security scenarios, meaning that you are able to deploy WSE 3.0 services and communicate with them from Indigo clients, and vice versa. WSE 3.0 also runs side-by-side with Indigo, meaning that you can deploy services today and, when Indigo arrives, add further services alongside them, choosing to migrate some as and when needed, whether for additional feature requirements, such as reliability, or for improved performance.

Note   WSE 3.0 is wire-level-compatible with Indigo using the HTTP protocol and the corresponding turnkey security scenarios. Interoperability is not guaranteed with other protocols such as TCP.

WSE 3.0 is a programming API for building secure Web services, and to that extent all the concepts and experience that you gain building applications with it are applicable to Indigo, and to other interoperable WS-* toolkits for that matter. Concepts used to define the contract for a service, messaging patterns, and integration with other products, such as BizTalk Server for message transformation and orchestration and SQL Server 2005 for storage, all carry forward to Indigo. In other words, to gain experience in distributed application development, build solutions with WSE today.

The WSE 3.0 release also provides some limited parity with Indigo in the programming model, notably around the alignment with the turnkey security scenarios. The WSE turnkey security assertions are aligned with the Indigo security element binding Authentication modes. At the time of writing, the alignment and guidance between the WSE configuration and policy settings and the Indigo standard bindings is still being worked through and will be the subject of future WSE 3.0-to-Indigo interoperability articles.

Although WSE 3.0 and Indigo run side-by-side, code migration is still a consideration. Migrating from WSE 3.0 to Indigo is intended to be simple and mechanical in the form of documented guidance, and/or scripts and tools. Migration will be simplified where the turnkey scenarios have been used, since architectural decisions around existing infrastructure will have already been made.

WSE 2.0/3.0/Indigo FAQ

This article has covered the WSE 3.0 functionality, but there are support and technology choice questions that are best addressed through a brief FAQ.

What is the WSE 3.0 release schedule?

WSE 3.0 will be released within four weeks of the Visual Studio 2005 release, which is currently scheduled for November 7, 2005. Until then, monthly Community Technology Previews (CTPs) will be made available and feedback and issues should be reported through the Product Feedback Center at http://lab.msdn.microsoft.com/productfeedback/.

Should I use WSE 2.0, WSE 3.0, or Indigo?

WSE 3.0 is a product that extends ASP.NET 2.0 Web services for the .NET Framework 2.0. It is targeted at the Visual Studio 2005 timeframe and once released is a supported and mature implementation of the latest Web services specifications and standards. WSE 3.0 ensures interoperability with Indigo services when using the turnkey security scenarios. Indigo is aimed at the Windows Longhorn timeframe.

Applications targeting the .NET Framework 1.1 should use WSE 2.0; however, WSE 2.0 is not wire-level compatible with WSE 3.0 or Indigo due to changes in a number of specifications such as WS-Addressing.

What is the relationship between WSE 2.0, WSE 3.0, and Indigo?

WSE addresses today's crucial need for developers to easily build secure Web services on the .NET Framework 2.0. In the Longhorn timeframe, Indigo will deliver a unified programming model and runtime for building distributed systems. Indigo provides a superset of today's Microsoft technologies for building distributed systems, including COM+, Enterprise Services, MSMQ, and WSE. WSE 3.0 is guaranteed to be wire-compatible with Indigo and to run side-by-side with it. Think of WSE as the current set of APIs for supporting advanced Web services and bringing service-orientation into your application design today. Think of Indigo as the set of APIs for building distributed systems that support Web services protocols in the "Longhorn" wave.

Does WSE 3.0 have the same object model as Indigo? Will WSE be source-code compatible with Indigo?

No. WSE provides a programming model that follows the Web services specifications, and consequently changes between releases. The Indigo programming model unifies Microsoft's technologies for building distributed systems and incorporates a broader range of technologies and functionality.

Is WSE 2.0 supported on the .NET Framework 2.0?

There will be a supported release of WSE 2.0 on the .NET Framework 2.0 after it has shipped. Applications built with WSE 2.0 SP2 and SP3 will be supported on the .NET Framework 2.0; however, WSE 3.0 is the preferred solution for the .NET Framework 2.0. There is no design-time support for WSE 2.0 with Visual Studio 2005, and WSE 2.0 is supported only on 32-bit platforms, not 64-bit. WSE 2.0 SP3 does install with Visual Studio 2005 Beta 2 and the .NET Framework 2.0, but is unsupported.

Can WSE 2.0 applications run side-by-side with WSE 3.0 applications?

Yes. You can install WSE 2.0 and WSE 3.0 on the same machine.

Is WSE 3.0 backwards compatible with the .NET Framework 1.1 and Visual Studio 2003?

No, WSE 3.0 runs only on .NET Framework 2.0.

What is the support policy for WSE 2.0 and 3.0?

WSE 2.0 is aligned with the .NET Framework 1.1, thus providing 5 years of mainstream support and 5 years of extended support (5+5). For WSE 2.0 support, see Product Lifecycle Dates - Developer Tools Family and Microsoft Support Lifecycle.

WSE 3.0 support is aligned with the .NET Framework 2.0.


Conclusion

WSE 2.0 considerably simplified the development and deployment of secure Web services by enabling developers to add message-level security to applications built on the principles of service-orientation and the emerging Web services (WS-*) specifications.

This article has provided an in-depth overview of WSE 3.0, which adds significant new functionality including enabling the ASMX programming model over multiple transports, simplified security policy to enable the turnkey security messaging scenarios, MTOM for sending large amounts of binary data, interoperability with Indigo, and conformance to the latest Web service specifications. Based on the goals of building secure Web services easily, simplified development of service-oriented systems with Visual Studio, and providing a stepping stone to Indigo, the WSE 3.0 release continues to provide a productive, extensible and easy-to-use platform for developing secure Web services today.