Security Guidelines: .NET Framework 2.0

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

patterns & practices Developer Center

J.D. Meier, Alex Mackman, Blaine Wastell, Prashant Bansode, Chaitanya Bijwe

Microsoft Corporation

October 2005

Applies To

  • .NET Framework 2.0


This module presents a set of consolidated .NET Framework 2.0 security guidelines. The guidelines are organized into the categories where mistakes are most often made, such as assembly design considerations, class design considerations, strong names, authentication and authorization, exception management, file I/O, and others. Each guideline explains what you should do and why, and then explains how you can implement it. Where necessary, the guidelines refer you to companion How To modules, which provide detailed step-by-step instructions for more complex implementation procedures. This guideline module has a corresponding checklist that summarizes the security guidelines; for the checklist, see "Security Checklist: .NET Framework 2.0." This module also includes an index of security guidelines for .NET Framework 2.0 applications.


How to Use This Module
What's New in 2.0
Index of Guidelines
Assembly Design Considerations
Class Design Considerations
Strong Names
Exception Management
File I/O
Communication Security
Event Log
Data Access
Sensitive Data
Unmanaged Code
Companion Guidance
Additional Resources


In this module, you will learn how to do the following:

  • Design assemblies with knowledge of your target trust environment.
  • Use appropriate class design to reduce attack surface area.
  • Determine when and how to use strong names.
  • Determine the risks introduced with AllowPartiallyTrustedCallersAttribute (APTCA).
  • Develop robust exception management.
  • Address canonicalization issues when your application accepts file names and paths.
  • Protect sensitive data in the registry.
  • Protect sensitive data passed over the network.
  • Avoid sensitive data in the event log.
  • Address data access security.
  • Minimize risk while calling delegates.
  • Avoid serializing sensitive data and do not trust serialized data.
  • Avoid threading vulnerabilities.
  • Minimize risk when your application calls unmanaged code.
  • Protect against reflection attacks.
  • Determine when to use obfuscation.
  • Use cryptographic services to protect sensitive data.

How to Use This Module

To get the most from this module, you should:

  • Use the index to browse the guidelines. Use the index to scan the guidelines and to quickly jump to a specific guideline.
  • Learn the guidelines. Browse the guidelines to learn what to do, why, and how.
  • Use the companion How To modules. Refer to the associated How To modules for more detailed step-by-step implementation details. They describe how to implement the more complex solution elements required to implement a guideline.
  • Use the companion checklist. Use the associated checklist as a quick reference to help you learn and implement the guidelines.

What's New in 2.0

.NET Framework version 2.0 introduces many new security features. The most notable enhancements are:

  • Full trust assemblies now satisfy any code access security demands. In .NET 2.0, any fully trusted assembly will satisfy any demand, including a link demand for an identity permission such as a System.Security.Permissions.StrongNameIdentityPermission that the assembly does not satisfy.
  • Enhanced security exception information. The System.Security.SecurityException object has been enhanced to provide more information in the case of a failed permission.
  • DPAPI managed wrapper. In the .NET Framework version 1.1 you had to use P/Invoke to access the Win32 Data Protection API (DPAPI) functions. In the .NET Framework version 2.0, you no longer need to use P/Invoke. Instead, you can use the new ProtectedData class. ProtectedData contains two static methods: Protect and Unprotect. To use DPAPI to encrypt data in memory, you can use the new ProtectedMemory class. Note that managed code requires the new DataProtectionPermission to be able to use DPAPI.
  • Secure string type. The System.Security.SecureString type uses DPAPI to ensure that secrets stored in string form are not exposed to memory or disk sniffing attacks.
  • XML encryption. The System.Security.Cryptography.Xml.EncryptedXML class can be used to secure sensitive data, such as database connection strings, that must be stored on disk.
  • Programming of ACLs and ACEs. You can now program access control lists (ACLs) and access control entities (ACEs) directly from managed code by using the System.Security.AccessControl namespace.
  • Programmatic Active Directory management. You can now perform management tasks for Microsoft Active Directory® directory service by using the System.DirectoryServices.ActiveDirectory namespace. Data in the directory can be accessed with the System.DirectoryServices namespace.
  • Security context for secure asynchronous code. It is now easier to write secure asynchronous code. The System.Security.SecurityContext class allows you to capture the security context of a thread, including code access security markers such as permit and deny and the thread impersonation token, and restore the context on another thread.
  • Secure communication between hosts. .NET Framework version 2.0 provides a set of managed classes in the System.Net.Security namespace to enable secure communication between hosts. This allows you to implement both client and server-side secure channels by using SSPI or SSL. These classes support mutual authentication, data encryption, and data signing.
  • Remoting TCP channel. The System.Runtime.Remoting.Channels.Tcp.TcpChannel class now uses SSPI to support both encryption and authentication. This makes it easier to develop secure remoting without the need for custom code.
  • Remoting IPC channel. The new System.Runtime.Remoting.Channels.Ipc.IpcChannel class is ideal for communication between components on the same physical computer. The underlying implementation uses named pipes that can be secured with ACLs.
  • Simple sandboxing. In .NET Framework version 1.x, to set up a sandboxed application domain (for example, to host untrusted code), you had to create an application domain policy level, create a series of code groups, and define the permission sets that would be granted to each one. In .NET Framework version 2.0, you can use a new overload of the static AppDomain.CreateDomain method to perform all of this work.
  • Global assembly cache means full trust. Assemblies in the global assembly cache are always granted full trust, regardless of the local computer's security policy.
  • Full trust means full trust. Full trust assemblies are never granted any code access security protection.
  • Security transparency. You can now mark assemblies with the System.Security.SecurityTransparentAttribute to let the common language runtime (CLR) know that your code will not perform security-sensitive code access security operations, such as asserting permissions or using stack walk modifiers to escalate privileges. If your code or any code you call attempts such operations, a security exception is generated. This is particularly useful if your code loads third-party plug-ins.
  • Loading code for inspection only. The new Assembly.ReflectionOnlyLoadFrom method allows you to load code purely to examine its members. The loaded code is not allowed to run.
  • OleDB provider supports partial trust. The System.Data.OleDb managed provider no longer requires full trust. Code just requires the OleDbPermission. This allows partial trust developers to access non-SQL databases. For ASP.NET applications, this permission is not granted by medium trust policy, although you can create custom ASP.NET policy files to allow partial trust ASP.NET applications to use OLE DB data sources.
  • SMTP code access security permission. The new System.Net.Mail.SmtpPermission is used to determine which code can send e-mail.
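The DPAPI managed wrapper described above can be used as shown in the following sketch. The sample data and the DpapiExample class name are illustrative; the code requires a reference to System.Security.dll and, for partial trust callers, the DataProtectionPermission noted above.

```csharp
using System;
using System.Text;
using System.Security.Cryptography;

public class DpapiExample
{
    public static void Main()
    {
        byte[] secret = Encoding.UTF8.GetBytes("sample sensitive data");
        byte[] entropy = { 9, 8, 7, 6, 5 };  // optional extra secret; may be null

        // Encrypt, scoping the key to the current user's profile.
        byte[] encrypted = ProtectedData.Protect(
            secret, entropy, DataProtectionScope.CurrentUser);

        // Decrypt; with CurrentUser scope this succeeds only for the same user.
        byte[] decrypted = ProtectedData.Unprotect(
            encrypted, entropy, DataProtectionScope.CurrentUser);

        Console.WriteLine(Encoding.UTF8.GetString(decrypted));
    }
}
```

Because DPAPI manages the keys itself, no key material needs to be stored or distributed with the application.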

Index of Guidelines

  • Assembly Design Considerations
  • Class Design Considerations
  • Strong Names
  • Exception Management
  • File I/O
  • Communication Security
  • Event Log
  • Data Access
  • Sensitive Data
  • Unmanaged Code

Assembly Design Considerations

One of the most significant issues to consider at design time is the trust level of your assembly's target environment. This trust level affects the code access security permissions granted to your code and to the code that calls your code. The trust level is determined by code access security policy defined by the administrator, and it affects the types of resources that your code is allowed to access, as well as the other privileged operations it can perform.

When designing your assembly, you should:

  • Identify your target trust environment.
  • Explicitly design your public interface.

Identify Your Target Trust Environment

If you develop partial trust code, you should identify the permissions that will be available to your code, and you should understand which APIs require additional permissions. This is important because partial trust code is unable to access all methods and resources that code running with full trust can access. Typical partial trust scenarios include:

  • An ASP.NET application that runs with medium trust. Examine the Web_MediumTrust.config file in the %windir%\Microsoft.NET\Framework\{version}\CONFIG directory to see the permissions granted to your code.
  • An application downloaded from the Internet. Use the Microsoft .NET Framework Configuration tool or Caspol.exe to see the permissions granted to code running in the Internet zone.
  • An application that runs from a file share. Use the Microsoft .NET Framework Configuration tool or Caspol.exe to see the permissions granted to code running in the intranet zone.

Explicitly Design Your Public Interface

Think carefully about the types and members that form part of your assembly's public interface. Design your interfaces at the beginning of your project, and use a well-designed, minimal public interface. Use friend assemblies to allow other assemblies to access internal and protected members. This is important because it limits your assembly's attack surface by minimizing the number of entry points.
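Friend assemblies are declared with the InternalsVisibleToAttribute. The following sketch (the assembly and member names are illustrative) grants a single named assembly access to this assembly's internal members; if the friend assembly is strong named, the attribute string must also include its full public key.

```csharp
using System.Runtime.CompilerServices;

// Only the named friend assembly can see this assembly's internal members.
[assembly: InternalsVisibleTo("MyCompany.FriendAssembly")]

internal class HelperClass
{
    // internal: callable from this assembly and from the friend assembly,
    // but not part of the public interface.
    internal static int Add(int a, int b)
    {
        return a + b;
    }
}
```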

Class Design Considerations

In addition to using a well-defined and minimal public interface, you can further reduce your assembly's attack surface by designing secure classes. Secure classes conform to solid object-oriented design principles, prevent inheritance where it is not required, limit who can call them, and limit which code can call them. The following recommendations can help you to design secure classes:

  • Restrict class and member visibility.
  • Consider using the sealed keyword.
  • Restrict access to your code.
  • Do not trust input.
  • Use properties to expose fields.
  • Use read-only properties appropriately.
  • Use private default constructors to prevent unwanted object instantiation.
  • Make static constructors private.

Restrict Class and Member Visibility

Class and member access modifiers allow you to restrict the callers of your code. The fewer entry points (public interfaces) you have in your code, the smaller your attack surface is and the easier it is to protect.

Use the most restrictive access modifier possible for your code. Use the private access modifier wherever possible. Use the protected access modifier only if the member should be accessible to derived classes. Use the internal access modifier only if the member should be accessible to other classes in the same assembly.

Note   Access modifiers are enforced at compile time only. When malicious code runs in a full trust environment, it could use reflection or unmanaged pointers to bypass these visibility restrictions.
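As a sketch, the class below (the names are illustrative) applies the most restrictive modifier that still satisfies each member's callers:

```csharp
public class Account
{
    // private: visible only inside this class.
    private decimal balance;

    // protected: visible to this class and derived classes only.
    protected void AuditChange(decimal delta) { /* ... */ }

    // internal: visible to code in the same assembly only.
    internal void Reconcile() { /* ... */ }

    // public: part of the assembly's attack surface; keep these to a minimum.
    public decimal GetBalance() { return balance; }
}
```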

Consider Using the Sealed Keyword

You can use the sealed keyword at the class and method level. In Visual Basic .NET, you can use the NotInheritable keyword at the class level or NotOverridable at the method level. If you do not want anyone to extend your base classes, you should mark them with the sealed keyword.

Before you use the sealed keyword at the class level, you should carefully evaluate your extensibility requirements. It is especially important to seal a class in the following situations:

  • The class contains security secrets, such as passwords, that are accessible through protected APIs.
  • The class contains many virtual members that cannot be sealed, and the type is not designed for third-party extensibility.

You can also seal individual methods and properties within a class. For example, if you derive from a base class that has virtual members and you do not want anyone to extend the functionality of the derived class, you can consider sealing the virtual members in the derived class. Sealing the virtual methods has performance benefits because it makes them candidates for inlining and other compiler optimizations.

Consider the following example.

public class MyClass {
  protected virtual void SomeMethod() { ... }
}

You can override and seal the method in a derived class, as follows.

public class DerivedClass : MyClass {
  protected override sealed void SomeMethod() { ... }
}

This code ends the chain of virtual overrides and makes DerivedClass.SomeMethod a candidate for inlining.

Note   Class sealing is enforced at compile time only. When malicious code runs in a full trust environment, it could use reflection or unmanaged pointers to bypass this restriction.

Restrict Access to Your Code

Not all methods in a class are meant to be accessed by all code. In some cases, you might need to restrict access to methods that are not intended for general public use but must still be marked as public. For example, you might expose public methods that need to be accessed by specific assemblies developed by your organization, but are not intended to be accessed by others. Consider the following approaches to restricting access to your code:

  • Strong name your assembly, so that partially trusted callers cannot call into it.
  • Restrict callers of your code by demanding specific or custom code access security permissions.

Do Not Trust Input

Do not trust any input to your application. Validate and constrain input by checking it for type, length, format, and range.

An attacker who passes malicious input can attempt SQL injection, cross-site scripting, and other injection attacks that aim to exploit your application's vulnerabilities.

Check for known good data and constrain input by validating it for type, length, format, and range. Check all numeric fields for type and range. You can use regular expressions and the Regex class, and you can validate numeric ranges by converting the input value to an integer or double, and then performing a range check.
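The following sketch shows both techniques: a Regex check against a known good pattern, and a numeric range check after conversion. The product-code format and the range limits are illustrative assumptions.

```csharp
using System.Text.RegularExpressions;

public class InputValidator
{
    // Known good pattern: exactly three uppercase letters followed by
    // four digits (an assumed format, for illustration only).
    public static bool IsValidProductCode(string input)
    {
        return input != null && Regex.IsMatch(input, @"^[A-Z]{3}\d{4}$");
    }

    // Convert the value, then range check it (assumed range 1-100).
    public static bool IsValidQuantity(string input)
    {
        int value;
        return int.TryParse(input, out value) && value >= 1 && value <= 100;
    }
}
```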

Use Properties to Expose Fields

Fields should not be exposed directly to calling code. Properties allow you to add additional constraints, such as input validation or permission demands.

Mark fields as private, and create read/write or read-only properties to access them.

Note   Private fields are enforced at compile time only. When malicious code runs in a full trust environment, it could use reflection or unmanaged pointers to bypass these visibility restrictions.
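A minimal sketch follows (the Employee type and its length limit are illustrative): the field stays private, and the property setter is the single place to add constraints such as validation or a permission demand.

```csharp
using System;

public class Employee
{
    // The field itself is never exposed to callers.
    private string name = String.Empty;

    public string Name
    {
        get { return name; }
        set
        {
            // Validation happens in one place; callers cannot bypass it.
            if (value == null || value.Length == 0 || value.Length > 100)
                throw new ArgumentException("Name must be 1-100 characters.");
            name = value;
        }
    }
}
```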

Use Read-only Properties Appropriately

Do not expose your fields to caller modification unless absolutely necessary. Mark properties read-only unless the caller needs to be able to make a modification. This prevents a caller from accidentally modifying a field.

Note   Read-only properties are enforced at compile time only. When malicious code runs in a full trust environment, it could use reflection or unmanaged pointers to bypass these visibility restrictions.

Use Private Default Constructors to Prevent Unwanted Object Instantiation

A class with a public constructor can be instantiated. Mark the class's constructor private if it is not designed to be instantiated. An example of a class that is not designed for instantiation is one that contains only static methods and properties.

Note   Private constructors are enforced at compile time only. When malicious code runs in a full trust environment, it could use reflection or unmanaged pointers to bypass these visibility restrictions.
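For example, a utility class that exposes only static members can use a private default constructor (the class shown is illustrative):

```csharp
public sealed class MathUtility
{
    // Private constructor: callers cannot instantiate this class;
    // it exists only to host static members.
    private MathUtility() { }

    public static int Square(int x)
    {
        return x * x;
    }
}
```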

Strong Names

Strong name signatures provide a unique identity for code, and they cannot be spoofed. The identity associated with the signature can be used to make security decisions. For example, you could set code access security policy to trust assemblies signed with your company's strong name when the assemblies come from a server you control. Consider the following guidelines for strong names:

  • Evaluate whether you need strong names.
  • Do not expect strong names to make your assembly tamper proof.
  • Use delay signing appropriately.
  • Use .pfx files to protect your private key if you do not use delay signing.
  • Do not depend on strong name identity permissions in full trust scenarios.

Evaluate Whether You Need Strong Names

From a security point of view, there are no reasons not to use strong names. However, they can make versioning more complicated. For example, if you fix a bug in a strong named assembly and increment its strong name version number, you have to either rebuild everything that depends on that assembly or deploy publisher policy with the assembly.

The following are the most common reasons to sign an assembly with a strong name:

  • You need to add your assembly to the global assembly cache. If you want your assembly to be shared among multiple applications, then you should add it to the global assembly cache. To add your assembly to the global assembly cache, you need to give it a strong name. Adding an assembly to the global assembly cache ensures that your assembly runs with full trust.
  • You want to prevent partial trust callers. The CLR prevents partially trusted code from calling a strong named assembly by adding a link demand for the Full Trust permission set. You can override this behavior by using AllowPartiallyTrustedCallersAttribute (APTCA), although you should do so only if you are fully aware of the issues and after careful code review. For more information, see the section, "APTCA," in this document.
  • You want cryptographically strong evidence for security policy evaluation. Strong names provide cryptographically strong evidence for code access security policy evaluation. This allows administrators to grant permissions to specific assemblies. For example, the public key component of a strong name is often used to represent a particular organization. You could create policy that only allows code from designated organizations to run on your computers.

Do Not Expect Strong Names to Make Your Assembly Tamper Proof

By adding a strong name to an assembly, you ensure that it cannot be modified without invalidating your strong name signature. However, the strong name does not make your assembly tamper proof. It is still possible to remove a strong name, modify the IL code, and then reapply a different strong name.

However, an attacker cannot recreate a valid signature from your original publisher's key unless your publisher's private key has been compromised. Because the key is part of the strong name identity, if an attacker strips a strong name signature, signs the code, and then installs the code in the global assembly cache, the code will have a different identity. Any callers looking for the original assembly will not bind to an assembly signed with a different private key. Strong names prevent this type of substitution attack.

Note   Both Authenticode and strong name signing ensure that if the signed code is tampered with, the signature will be invalidated. However, neither technology prevents an attacker from stripping off the signature, modifying the IL, and signing the code with the attacker's key.

Use Delay Signing Where Appropriate

You should consider using delay signing when there are more than a few developers on your projects and/or you have a daily build process that allows you to easily sign assemblies generated from your daily build. Even if you do not meet either of these conditions, you should consider using delay signing if your private key is extremely sensitive; for example, because you have made trust decisions based on your private key across the thousands of desktops in your enterprise.

Delay signing places the public key in the assembly, which means that it is available as evidence to code access security policy, but the assembly is not signed. From a security perspective, delay signing has two main advantages:

  • The private key used to sign the assembly and create its digital signature is held securely in a central location. The key is accessible to only a few trusted individuals. As a result, the chance of the private key being compromised is significantly reduced.
  • A single public key, which can be used to represent the development organization or publisher of the software, is used by all members of the development team, instead of each developer using his or her own public/private key pair, typically generated by the sn -k command.

Creating a Public Key File for Delay Signing

When you use delay signing, distribute your public key in a .snk file that contains only the public key. Then, use the -keyfile compiler switch when you delay sign the assemblies, or continue to use the AssemblyKeyFile attribute if you have existing .NET 1.x code that uses this attribute.

The signing authority performs the following procedure to create a public key file that developers can use to delay sign their assemblies.

To create a public key file for delay signing

  1. Create a key pair for your organization.

    sn.exe -k keypair.snk

  2. Extract the public key from the key pair file.

    sn -p keypair.snk publickey.snk

  3. Secure Keypair.snk, which contains both the private and public keys. For example, put it on a compact disc or other hardware device, such as a smart card, and physically secure it.
  4. Make Publickey.snk available to all developers. For example, put it on a network share.

Delay Signing an Assembly

This procedure is performed by developers.

To delay sign an assembly

  1. In Visual Studio 2005, display the project properties.
  2. Click the Signing tab, and select the Sign the assembly and Delay sign only check boxes.
  3. In the Choose a strong name key file: drop-down box, select <Browse …>.
  4. In the file selection dialog box, browse to the public key (.snk) and click OK.
  5. Build your assembly. The compiler will build a delay signed assembly by using the public key from the selected .snk file.
    Note   A delay signed project will not run and cannot be debugged. You can, however, use the Strong Name tool (Sn.exe) with the -Vr option to skip verification during development.
  6. The delay signing process and the absence of an assembly signature means that the assembly will fail verification at load time. To work around this, use the following commands on development and test computers.
    • To disable verification for a specific assembly, use the following command.

      sn -Vr assembly.dll

    • To disable verification for all assemblies with a particular public key, use the following command.

      sn -Vr *,publickeytoken

    • To extract the public key and key token (a truncated hash of the public key), use the following command.

      sn -Tp assembly.dll

    Note   Use an uppercase -T switch.
  7. To complete the signing process and create the digital signature, execute the following command. This requires the private key, so the operation is normally performed as part of the formal build/release process. The following command uses the key pair contained in the Keypair.snk file to re-sign an assembly called Assembly.dll with a strong name.

    sn -R assembly.dll keypair.snk

Use .pfx Files to Protect Your Private Key If You Do Not Use Delay Signing

If you do not use delay signing, use password-protected .pfx files to protect your private key. Visual Studio 2005 adds support for .pfx files, which makes this very convenient. This approach is more appropriate for small to medium sized projects. If you previously passed around or had access to .snk files that included a private key, you should consider using .pfx files instead.

Do Not Depend on Strong Name Identity Permissions in Full Trust Scenarios

If you protect your code with a link demand for a StrongNameIdentityPermission to restrict the code that can call your code, be aware that this only works for partial trust callers. The link demand will always succeed for full trust callers, regardless of the strong name of the calling code.

In .NET Framework 2.0, any fully trusted assembly will satisfy any demand, including a link demand for an identity permission that the assembly does not satisfy. In .NET Framework 1.0, this did not happen automatically. However, a fully trusted assembly could simply call Assembly.Load, supplying as evidence the strong name it wants to satisfy, or, alternatively, it could turn code access security off.

The only protection against fully trusted code is to put it in a separate process and run that process with a restricted token so that its limits are enforced by the operating system. This applies whether code marks its interfaces as internal or private, or places link demands for StrongNameIdentityPermission on them.

The following code sample shows a method decorated with a link demand for a specific StrongNameIdentityPermission.

public sealed class Utility
{
  // Although SomeOperation() is a public method, the following
  // permission demand means that it can only be called by partial trust
  // assemblies with the specified public key OR by any fully trusted code.
  [StrongNameIdentityPermission(SecurityAction.LinkDemand, PublicKey="...")]
  public static void SomeOperation() {}
}


APTCA

Strongly named, fully trusted assemblies are given an implicit link demand for Full Trust on every public and protected method of every publicly visible class. Any code in your assembly that external code could use as an entry point is protected with this link demand. The CLR enforces this link demand to help prevent luring attacks, where untrusted code lures fully trusted code into doing something dangerous on its behalf.

Note   APTCA removes the implicit link demands only. Any demands explicitly placed in your assembly will still be enforced.

When working with APTCA, consider the following guidelines:

  • Avoid using APTCA.
  • Consider using SecurityTransparent and SecurityCritical.

Avoid Using APTCA

You can override the link demand for full trust by adding the AllowPartiallyTrustedCallersAttribute (APTCA) to your assembly, as shown below, but you should do so only when it is absolutely necessary.

Use of APTCA increases the attack surface and exposes your code to partial trust callers.

Use APTCA only if both of the following conditions apply:

  • You specifically want partially trusted callers to use your strong named assembly. For example, you might need an application running on a file share to access your assembly located in the global assembly cache. However, do not open up an attack surface with APTCA if you only want fully trusted callers to use your code.
  • You have performed a thorough security code review, and your code has been rigorously audited for security vulnerabilities. Examine the resource access and other privileged operations performed by your assembly, and consider authorizing access to these operations by using other code access security demands.

The APTCA attribute is shown below.

 [assembly: AllowPartiallyTrustedCallersAttribute()]

Consider Using SecurityTransparent and SecurityCritical

If you know that your code will not attempt to elevate the permissions of the call stack—for example, by using assertions or stack walk modifiers—consider marking the code with the System.Security.SecurityTransparentAttribute. This is particularly useful if your code calls untrusted code such as a third-party plug-in. If the untrusted code attempts to manipulate the permissions of the call stack, a security exception is generated. Security transparent code and the code it calls give up the right to elevate the permissions of the call stack.

In addition to providing added protection when calling untrusted code, marking code explicitly with the SecurityTransparent and SecurityCritical attributes helps people who have to review your code. Typically, the majority of your code will not elevate code access security permissions, and can therefore be marked as SecurityTransparent. The small amount of your code that actually performs elevations of privilege should be marked as critical. This helps reviewers to focus on those areas of code marked as security critical.

Note   Because a transparent assembly cannot be used by partially trusted code to increase its effective permission set, any assemblies that are marked transparent do not require as thorough a security audit as standard APTCA assemblies.

Marking an assembly SecurityTransparent forces it to abide by the following rules:

  • It cannot assert permissions to stop a stack walk from continuing.
  • It cannot satisfy a link demand. Instead, any link demands on APIs called by the transparent assembly are automatically converted into full demands.
  • It cannot automatically use unverifiable code, even if it has SkipVerification permission. Instead, a full demand for UnmanagedCode occurs.
  • It cannot automatically make calls to P/Invoke methods, even if it has been decorated with the SuppressUnmanagedCodeAttribute. Instead, a full demand for UnmanagedCode occurs.

Note   By default, all assemblies compiled for .NET 1.x and .NET 2.0 are security critical and contain only critical code.

To make an entire assembly transparent, you can explicitly add SecurityTransparent to an assembly by using the following attribute:

 [assembly: SecurityTransparent]

This indicates that the assembly does not contain any critical code, and does not elevate the privileges of the call stack in any way.

If you need to mix critical and transparent code in the same assembly, start by marking the assembly with the System.Security.SecurityCriticalAttribute as shown here.

 [assembly: SecurityCritical]

By marking the assembly with the SecurityCriticalAttribute, you indicate that the assembly can contain critical code. However, unless explicitly marked as critical, all code within the assembly defaults to being transparent. If you want to perform security critical actions, you must explicitly mark the code that will perform the critical action with another SecurityCritical attribute, as shown in the following example.

 [assembly: SecurityCritical]

public class A
{
    [SecurityCritical]
    public void Critical()
    {
        // critical
    }

    public int SomeProp
    {
        get { /* transparent */ }
        set { /* transparent */ }
    }
}

public class B
{
    internal string SomeOtherProp
    {
        get { /* transparent */ }
        set { /* transparent */ }
    }
}

All of the code above is transparent (the default setting even with the assembly level SecurityCritical attribute), with the exception of the method Critical, which is explicitly marked as critical.

You can also mark all code within an entire class as critical. To do so, use the SecurityCritical attribute at the class level, and then pass SecurityCriticalScope.Everything, as shown in the following example.

 [assembly: SecurityCritical]

 [SecurityCritical(SecurityCriticalScope.Everything)]
public class MyClass
{
    // every member of MyClass is security critical
}

Exception Management

Do not reveal implementation details about your application in exception messages returned to the client. This information can help malicious users plan attacks on your application. To provide proper exception management:

  • Use structured exception handling.
  • Do not log sensitive data.
  • Do not reveal system or sensitive application information.
  • Consider exception filter issues.
  • Consider an exception management system.
  • Fail early to avoid unnecessary processing that consumes resources.

Use Structured Exception Handling

Use structured exception handling instead of returning error codes from methods, because it is easy to forget to check a return code and, as a result, your code can fail in an insecure mode.

The Microsoft Visual C#® development tool and the Microsoft Visual Basic® .NET development system provide structured exception handling constructs. C# provides the try/catch/finally construct. You can protect code by placing it inside try blocks, and implement catch blocks to log and process exceptions. Also, use the finally construct to ensure that critical system resources, such as connections, are closed whether or not an exception condition occurs.
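As an illustrative sketch (the SafeReader class, the file-based resource, and the message text below are hypothetical, not from this module), the pattern looks like this: the catch block keeps the detail private and surfaces only a generic message, while the finally block releases the resource on every path.

```csharp
using System;
using System.IO;

public static class SafeReader
{
    // Protect resource access with try/catch/finally: log the detailed
    // error privately, return only a generic message to the caller.
    public static string ReadFirstLine(string path, out string userMessage)
    {
        StreamReader reader = null;
        userMessage = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadLine();
        }
        catch (IOException)
        {
            // Write the full exception detail to a private log here.
            // The caller sees only a generic message.
            userMessage = "An error occurred while processing your request.";
            return null;
        }
        finally
        {
            // Runs whether or not an exception occurred, so the
            // resource is always released.
            if (reader != null)
                reader.Close();
        }
    }
}
```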

Do Not Log Sensitive Data

Avoid logging sensitive or private data such as user passwords. Also, make sure that exception details are not allowed to propagate beyond the application boundary to the client. The rich exception details included in Exception objects are valuable to developers and attackers alike. Log details by writing them in the event log to aid problem diagnosis.

Do Not Reveal System or Sensitive Application Information

Do not reveal too much information to the caller. Exception details can include operating system and .NET Framework version numbers, method names, computer names, SQL command statements, connection strings, and other details that are very useful to attackers. Write detailed error messages in the event log, and return generic error messages to the user.

Consider Exception Filter Issues

If your code fails to catch exceptions and your code uses impersonation, a malicious user could use exception filters to execute code that runs under the impersonated security context, even if you are reverting the impersonation in your finally block. This is particularly serious if your code impersonates a privileged account. If your code does not catch the exception, exception filters higher in the call stack can be executed before code in your finally block is executed.

If you use programmatic impersonation, use structured exception handling and put the impersonation code inside try blocks. Use a catch block to handle exceptions and to prevent exceptions propagating. Use a finally block to ensure that the impersonation is reverted, as shown in the following example.

using System.Security.Principal;
. . .
WindowsIdentity winIdentity = new WindowsIdentity("username@domainName");
WindowsImpersonationContext ctx = null;
try
{
  ctx = winIdentity.Impersonate();
  // Do work.
}
// Do not let the exception propagate. Catch it here.
catch (Exception ex)
{
  // Log and handle the exception.
}
finally
{
  // Stop impersonating.
  if (ctx != null)
    ctx.Undo();
}
By using a finally block, you ensure that the impersonation token is removed from the current thread, even if an exception is generated. By preventing the exception from propagating from the catch block, you make sure that exception filter code higher in the call stack does not execute while the thread still has an impersonation token attached to it.

Note   Exception filters are supported by Microsoft Intermediate Language (MSIL) and Visual Basic .NET.

Consider an Exception Management System

Consider using a formal exception management system for your application because this can help improve system supportability and maintainability and ensure that you detect, log, and process exceptions in a consistent manner.

For information about how to create an exception management framework and about best practices for exception management in .NET applications, see the companion guidance and additional resources at the end of this module.

Fail Early to Avoid Unnecessary Processing that Consumes Resources

Check that your code fails early to avoid unnecessary processing that consumes resources. If your code does fail, check that the resulting error does not allow a user to bypass security checks to run privileged code.

File I/O

Canonicalization issues are a major concern for code that accesses the file system. If you have the choice, do not base security decisions on input file names because of the many ways that a single file name can be represented. If your code needs to access a file using a user-supplied file name, take steps to make sure that a malicious user cannot use your assembly to gain access to or overwrite sensitive data.

The following recommendations help you improve the security of your file I/O:

  • Avoid untrusted input for file names and file paths.
  • If you accept file names, validate them.
  • Use absolute file paths where you can.
  • Consider constraining file I/O within your application's context.

Avoid Untrusted Input for File Names and File Paths

Avoid writing code that accepts file or path input from the caller. Instead, use fixed file names and locations when your code reads and writes data. This ensures that your code cannot be coerced into accessing arbitrary files. Also avoid making security decisions based on user-supplied filenames.

If You Accept File Names, Validate Them

If you do need to receive input file names from the caller, make sure that the file names are strictly formed so that you can determine whether they are valid. There are two aspects to validating input file paths. You need to:

  • Check for valid file system names.
  • Check for a valid location as defined by your application's context. For example, are the file names within the directory hierarchy of your application?

To validate a path and file name, use the System.IO.Path.GetFullPath method as shown in the following code example. This method also canonicalizes the supplied file name.

using System.IO;
public static string ReadFile(string filename)
{
  // Obtain a canonicalized and valid filename
  string name = Path.GetFullPath(filename);
  // Now read the file and return the file content.
  . . .
}
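The location check described above can be sketched as follows; the PathValidator class and method names are illustrative, and in practice the application directory would come from your own configuration rather than a parameter.

```csharp
using System;
using System.IO;

public static class PathValidator
{
    // Sketch: canonicalize the supplied name, then verify that it falls
    // under the application's directory before performing any file access.
    public static bool IsWithinAppDir(string appDir, string filename)
    {
        // Canonicalize both paths so that ..\ sequences and redundant
        // separators cannot defeat the comparison.
        string fullAppDir = Path.GetFullPath(appDir);
        if (!fullAppDir.EndsWith(Path.DirectorySeparatorChar.ToString()))
            fullAppDir += Path.DirectorySeparatorChar;
        string fullPath = Path.GetFullPath(Path.Combine(appDir, filename));
        // Case-insensitive comparison matches Windows file system semantics.
        return fullPath.StartsWith(fullAppDir, StringComparison.OrdinalIgnoreCase);
    }
}
```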

As part of the canonicalization process, GetFullPath performs the following checks:

  • It checks that the file name does not contain any invalid characters, as defined by Path.InvalidPathChars.
  • It checks that the file name represents a file and not another device type, such as a physical drive, a named pipe, a mail slot, or a DOS device such as LPT1, COM1, AUX, and other devices.
  • It checks that the combined path and file name is not too long.
  • It removes redundant characters such as trailing dots.
  • It rejects file names that use the \\?\ format.

Use Absolute File Paths Where You Can

Try to use absolute file paths where you can. Do not trust an environment variable to construct a file path because you cannot guarantee the value of the environment variable.
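A minimal sketch of such a check, assuming a hypothetical AbsolutePathCheck helper, rejects any path that is not already fully qualified:

```csharp
using System;
using System.IO;

public static class AbsolutePathCheck
{
    // Sketch: accept only paths that are already fully qualified.
    // Path.IsPathRooted filters out relative paths, and comparing against
    // Path.GetFullPath rejects rooted-but-unnormalized forms, such as
    // paths containing ..\ segments.
    public static bool IsAbsolute(string path)
    {
        return Path.IsPathRooted(path) &&
               string.Equals(path, Path.GetFullPath(path), StringComparison.Ordinal);
    }
}
```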

Consider Constraining File I/O within your Application's Context

After you know that you have a valid file system file name, you might need to check that it is valid in your application's context. For example, you may need to check that it is within the directory hierarchy of your application and that your code cannot access arbitrary files on the file system.

You can use a restricted FileIOPermission to constrain an assembly's ability to perform file I/O; for example, by specifying allowed access rights (read, read/write, and so on) or limiting access to specific directories.

You can use declarative attributes together with SecurityAction.PermitOnly to constrain file I/O, as shown in the following example.

// Allow only this code to read files from c:\YourAppDir
[FileIOPermission(SecurityAction.PermitOnly, Read=@"c:\YourAppDir\")]
[FileIOPermission(SecurityAction.PermitOnly, PathDiscovery=@"c:\YourAppDir\")]
public static string ReadFile(string filename)
{
  // Use Path.GetFullPath() to canonicalize the file name
  // Use FileStream.OpenRead to open the file
  // Use FileStream.Read to access and return the data
}

Note   The second attribute, which specifies PathDiscovery access, is required by the Path.GetFullPath method that is used to canonicalize the input file name.

To avoid hard coding your application's directory hierarchy, you can use imperative security syntax and call HttpContext.Current.Request.MapPath(".") to retrieve your Web application's directory at run time. You must reference the System.Web assembly and add the corresponding using statement, as shown in the following example.

using System.Web;
using System.Security.Permissions;
public static string ReadFile(string filename)
{
  string appDir = HttpContext.Current.Request.MapPath(".");
  FileIOPermission f = new FileIOPermission(PermissionState.None);
  f.SetPathList(FileIOPermissionAccess.Read, appDir);
  f.SetPathList(FileIOPermissionAccess.PathDiscovery, appDir);
  f.PermitOnly();

  // Use Path.GetFullPath() to canonicalize the file name
  // Use FileStream.OpenRead to open the file
  // Use FileStream.Read to access and return the data
}

Partial-trust ASP.NET Web applications that run with medium trust use code access security to restrict the directories that your application can access. Medium trust policy permits your application to access the directories beneath your application's virtual root directory. For more information about running ASP.NET applications at medium trust, see "How To: Use Medium Trust in ASP.NET 2.0."


Registry

In some scenarios, the registry can provide a suitable location for storing sensitive application configuration data. You can store configuration data under the local machine key (HKEY_LOCAL_MACHINE) or under the current user key (HKEY_CURRENT_USER). If you store sensitive data in the registry, consider restricting access to that data and consider encrypting it prior to storage.

Consider the following guidelines when using the registry.

  • Consider using ACLs to restrict access to data stored in HKLM.
  • Consider encrypting sensitive data in the registry.

Consider Using ACLs to Restrict Access to Data Stored in HKLM

If you store sensitive data in the HKEY_LOCAL_MACHINE section of the registry, consider applying an ACL to the registry key that restricts access to a specific account, such as the account under which your application runs. This is important because any process running on your computer has access to the HKEY_LOCAL_MACHINE section of the registry. By isolating your application and running it under a dedicated custom account, only your application can access the data from the ACL-protected key.

Note   Use of HKEY_LOCAL_MACHINE makes it easier to store configuration data at installation time and maintain it later on.

If your security requirements dictate an even less accessible storage solution, use a key under HKEY_CURRENT_USER. This approach means that you do not have to explicitly configure ACLs because access to the current user key is automatically restricted based on process identity.

Note   HKEY_CURRENT_USER allows more restrictive access because a process can access the current user key only if the user profile associated with the current thread or process token is loaded.

Consider Encrypting Sensitive Data in the Registry

If you need to store sensitive data in the registry, then consider encrypting it with DPAPI. You can use DPAPI with the machine key to encrypt the data, store the encrypted data beneath a registry key, and then use an ACL that restricts access to your specific application identity to restrict access to the registry key. Alternatively, you can use DPAPI with the user store. In this latter case, you need to load the user account's profile to access the key.

Consider using the machine store if your application is a server that runs on its own dedicated computer with no other applications, or if you have multiple applications on the same server and you need those applications to be able to share the sensitive registry data. If you only want specific service accounts to be able to access the DPAPI keys and their profiles are loaded, then use DPAPI with the user store.

The following code shows how to create a registry key protected with an ACL and how to use DPAPI with the machine store to store encrypted data in the restricted key.

using System.Security.Cryptography;
using System.Security.AccessControl;
using System.Text;
using Microsoft.Win32;

// Get the original data in a byte array
byte[] toEncrypt = Encoding.ASCII.GetBytes(
                   "The secret data to be encrypted");

// Encrypt the data by using the ProtectedData class.
byte[] encryptedData = ProtectedData.Protect(toEncrypt, null,
                       DataProtectionScope.LocalMachine);

// Create a new key in the registry with a restricted ACL 
// and write the stream of bytes to the registry key
string user = Environment.UserDomainName + "\\" + Environment.UserName;
RegistrySecurity security = new RegistrySecurity();
RegistryAccessRule rule = new RegistryAccessRule(user,
                          RegistryRights.FullControl,
                          AccessControlType.Allow);
security.AddAccessRule(rule);
Registry.CurrentUser.CreateSubKey("TestEncryptedData",
         RegistryKeyPermissionCheck.ReadWriteSubTree, security);
Registry.SetValue(@"HKEY_CURRENT_USER\TestEncryptedData", "Encrypted",
                  encryptedData, RegistryValueKind.Binary);

Use the following code to decrypt the data stored in the registry.

// Read the encrypted data from the registry and decrypt the contents 
byte[] dataFromRegistry = Registry.GetValue(
                 @"HKEY_CURRENT_USER\TestEncryptedData",
                 "Encrypted", null) as byte[];
byte[] decryptedData = ProtectedData.Unprotect(dataFromRegistry, null,
                       DataProtectionScope.LocalMachine);

Communication Security

If your application passes sensitive data over networks, consider the threats of eavesdropping, tampering, and unauthorized callers accessing your endpoint. The .NET Framework 2.0 provides a set of managed classes in the System.Net.Security namespace to enable secure communication between hosts when you are using remoting or raw socket-based communication. These classes allow you to implement secure channels on both the client and the server by using SSPI or SSL, and they support mutual authentication, data encryption, and data signing.

If you use remoting or sockets, consider the following guidelines:

  • Consider transport level encryption to protect secrets on the network.
  • If you are using the TCP channel with .NET remoting, consider System.Net.Security.NegotiateStream.

Consider Transport Level Encryption to Protect Secrets on the Network

If your servers are not inside a physically secure data center where the network eavesdropping threat is considered insignificant, you need to use an encrypted communication channel to protect data sent between servers. You can use SSL or IPSec to encrypt traffic and help protect communication between servers. Use SSL when you need granular channel protection for a particular application, instead of protection for all applications and services running on a computer. Use IPSec to help protect the communication channel between two servers and to restrict the computers that can communicate with each other. For example, you can help protect a database server by establishing a policy that permits requests only from a trusted client computer, such as an application or Web server. You can also restrict communication to specific IP protocols and TCP/UDP ports.

If You Use the TCP Channel with .NET Remoting, Consider System.Net.Security.NegotiateStream

In .NET Framework 1.1, remoting applications that use the TCP channel do not by default perform authentication or encryption. In .NET Framework 2.0, the remoting framework uses the new System.Net.Security.NegotiateStream class to encrypt and sign the data transported over the channel and to authenticate callers. To use this feature, you can configure the <channel> element in the Machine.config file, the Web.config file, or the App.config file, depending on whether you want to apply the setting across all applications on your computer or to a specific application.

The following example shows how a server specifies that authentication is required and that the channel should be protected with encryption.

<channel ref="tcp" port="1234" 
         authenticationMode="IdentifyCallers" secure="true" />

To authenticate clients by using their domain credentials, you need to set the useDefaultCredentials attribute of the <channel> in the client configuration to true. The following example shows a sample client configuration.

<channel ref="tcp" useDefaultCredentials="true" secure="true" 
         impersonationLevel="Identify" />

To use Kerberos authentication, the client must specify a service principal name (SPN). This can be done programmatically or in the client's configuration file, as shown in the following example.

<channel ref="tcp" 
         spn="someService/" />

Note   Use of .NET remoting is not encouraged for interprocess or server-to-server communication. .NET remoting is intended for cross-application domain communication within a process.

Event Log

When you write event logging code, consider the threats of tampering and information disclosure. For example, can an attacker retrieve sensitive data by accessing the event logs? Can an attacker cover tracks by deleting the logs or erasing particular records?

Windows security restricts direct access to the event logs using system administration tools, such as the Event Viewer. Your main concern should be to ensure that the event logging code you write cannot be used by a malicious user for unauthorized access to the event log.

Consider the following guidelines:

  • Do not log sensitive data.
  • Do not expose event log data to unauthorized users.

Do Not Log Sensitive Data

Do not log sensitive user information, such as credentials, credit card numbers, or user IDs. When the information has been sent to the log, it can be viewed by anyone with access to the event log. To prevent the disclosure of sensitive data, do not log it in the first place. The event log is a useful location to store application execution information and error information.

Do Not Expose Event Log Data to Unauthorized Users

Direct access to the event log through tools such as the Event Viewer is restricted to administrators. Do not expose event log data to less privileged users because the log may contain information about application or system internals that could be useful to an attacker.

Data Access

Two of the most important factors to consider when your code accesses a database are how to manage database connection strings securely and how to construct SQL statements and validate input to prevent SQL injection attacks. Also, when you write data access code, consider the permission requirements of your chosen ADO.NET data provider.

  • Do not hard code connection strings.
  • Consider encrypting connection strings.
  • Prevent SQL injection.

Do Not Hard Code Connection Strings

Do not hard code connection strings in your assembly. An attacker with access to your application can extract this information directly from the assembly. An attacker can also use a decompiler to reconstitute your code, making discovery of this information even easier.

Store connection strings externally, for example in configuration files.
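For example, in an ASP.NET 2.0 application the connection string can live in the <connectionStrings> section of Web.config; the name MyDbConn and the server details below are placeholders:

```xml
<configuration>
  <connectionStrings>
    <add name="MyDbConn"
         connectionString="Data Source=dbServer;Initial Catalog=Northwind;Integrated Security=SSPI;"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

Code can then retrieve the string by name at run time (for example, with ConfigurationManager.ConnectionStrings["MyDbConn"].ConnectionString) instead of embedding it in the assembly.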

Consider Encrypting Connection Strings

Sensitive data items such as connection strings stored in configuration files should be encrypted. Encrypting connection strings is particularly important if they contain user credentials; for example, connection strings used with SQL authentication.

In ASP.NET 2.0, store connection strings in the <connectionStrings> section of Web.config file, and use the Aspnet_regiis tool to encrypt this section. This tool uses one of the protected configuration providers that support DPAPI or RSA encryption.
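For example, the following command encrypts the <connectionStrings> section of an application deployed at the virtual path /MyApp; the virtual path is a placeholder for your own application path:

```
aspnet_regiis -pe "connectionStrings" -app "/MyApp"
```

Because decryption is performed transparently by the protected configuration provider, application code that reads the section does not change.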

For more information, see the companion guidance and additional resources at the end of this module.

Prevent SQL Injection

To help prevent SQL injection, you should validate input and use parameterized stored procedures for data access. The use of parameters (for example, SqlParameterCollection) ensures that input values are checked for type and length and values outside the range throw an exception. Parameters are also treated as safe literal values and not as executable code within the database. The following code shows how to use SqlParameterCollection when calling a stored procedure.

using System.Data;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
  DataSet userDataset = new DataSet();
  SqlDataAdapter myCommand = new SqlDataAdapter( 
             "LoginStoredProcedure", connection);
  myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
  myCommand.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
  myCommand.SelectCommand.Parameters["@au_id"].Value = SSN.Text;
  myCommand.Fill(userDataset);
}


Avoid stored procedures that accept a single parameter as an executable query. Instead, pass query parameters only.

Use structured exception handling to catch errors that occur during database access, and prevent them from being returned to the client. A detailed error message may reveal valuable information such as the connection string, SQL server name, or table and database naming conventions. Attackers can use this information to construct more precise attacks.

As an additional precaution, use a least privileged account to access the database, so that even if your application is compromised, the impact will be reduced.

For more information, see "How To: Protect From SQL Injection in ASP.NET."


Delegates

Delegates are the managed equivalent of type-safe function pointers. The .NET Framework uses them to support events. The delegate object maintains a reference to a method, which is called when the delegate is invoked. Events allow multiple methods to be registered as event handlers. When the event occurs, all event handlers are called.

  • Avoid accepting delegates from untrusted sources.
  • Consider restricting permissions to the delegate.
  • Avoid asserting permissions before calling a delegate.

Avoid Accepting Delegates from Untrusted Sources

If your assembly exposes a delegate or an event, be aware that any code can associate a method with the delegate, and you have no advance knowledge of what the code will do. The safest policy is not to accept delegates from untrusted callers. If your assembly is strong named and does not include the AllowPartiallyTrustedCallersAttribute, only full trust callers can pass a delegate to your code.

Consider Restricting Permissions to the Delegate

If you allow partially trusted callers, you should consider restricting permissions to the delegate. You can either use an appropriate permission demand to authorize the external code when it passes the delegate to your code, or you can use a deny or permit-only stack modifier to restrict the delegate's permissions just prior to calling it. For example, the following code grants the delegate code only execution permission to constrain its capabilities.

using System.Security;
using System.Security.Permissions;

// Delegate definition
public delegate void SomeDelegate(string text);

public void ExecDelegateWithExecPerm()
{
    // Permit only execution, prior to calling the delegate. This prevents the
    // delegate code accessing resources or performing other privileged
    // operations
    new SecurityPermission(SecurityPermissionFlag.Execution).PermitOnly();

    // Now call the "constrained" delegate
    SomeDelegate del = new SomeDelegate(DisplayResults);
    del("some text");

    // Revert the permit only stack modifier
    CodeAccessPermission.RevertPermitOnly();
}

private void DisplayResults(string result)
{
    // Process the result.
}

Avoid Asserting Permissions Before Calling a Delegate

Asserting a permission before calling a delegate is dangerous because you have no knowledge about the nature or trust level of the code that will be executed when you invoke the delegate. The code that passes you the delegate is on the call stack and can therefore be checked with an appropriate security demand. However, there is no way of knowing the trust level or permissions granted to the delegate code itself.


Serialization

You may need to add serialization support to a class if you need to be able to marshal it by value across a .NET remoting boundary (that is, across application domains, processes, or computers) or if you want to be able to persist the object state to create a flat data stream, perhaps for storage on the file system.

By default, classes cannot be serialized. A class can be serialized if it is marked with the SerializableAttribute or if it implements ISerializable. If you use serialization:

  • Do not serialize sensitive data.
  • Validate serialized data streams.

Do Not Serialize Sensitive Data

If you must serialize your class and it contains sensitive data, avoid serializing the fields that contain the sensitive data. Either implement ISerializable to control the serialization behavior or decorate fields that contain sensitive data with the [NonSerialized] attribute. By default, all private and public fields are serialized. This is important because serialization places the data in memory, often in preparation for sending it over a network, making it easier for an attacker to gain access to it.

The following example shows how to use the [NonSerialized] attribute to ensure that a specific field which contains sensitive data cannot be serialized.

[Serializable]
public class Employee {
  // OK for name to be serialized
  private string name;
  // Prevent salary being serialized
  [NonSerialized] private double annualSalary;
  . . .
}

Alternatively, implement the ISerializable interface and explicitly control the serialization process. If you must serialize the sensitive item or items of data, consider encrypting the data first. The code that de-serializes your object must have access to the decryption key.
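A sketch of the ISerializable approach is shown below; the Account class and its fields are illustrative. GetObjectData writes only the non-sensitive field, so the password never enters the serialized stream.

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Account : ISerializable
{
    private string owner;       // safe to serialize
    private string password;    // sensitive; deliberately omitted from the stream

    public Account(string owner, string password)
    {
        this.owner = owner;
        this.password = password;
    }

    // Deserialization constructor: only the non-sensitive state is restored.
    protected Account(SerializationInfo info, StreamingContext context)
    {
        owner = info.GetString("owner");
    }

    // Explicitly control which fields enter the serialized stream.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("owner", owner);
        // The password field is intentionally not added.
    }

    public string Owner { get { return owner; } }
}
```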

Validate Serialized Data Streams

Serialized data should not be considered trusted data. Subject it to the same level of scrutiny that you would subject any other untrusted file, network, or user input. To avoid potentially damaging data being injected into the object, validate each field as it is reconstituted as shown in the following example.

public void DeserializationMethod(SerializationInfo info, StreamingContext cntx)
{
  string someData = info.GetString("someName");
  // Use input validation techniques to validate this data.
}


Threading

Bugs caused by race conditions in multithreaded code can result in security vulnerabilities and generally unstable code that is subject to timing-related bugs. If you develop multithreaded assemblies, consider the following guidelines:

  • Do not cache the results of security checks.
  • Avoid losing impersonation tokens.
  • Synchronize static class constructors.
  • Synchronize Dispose methods.

Do Not Cache the Results of Security Checks

If your multithreaded code caches the results of a security check, perhaps in a static variable, the code is potentially vulnerable, as shown in the following example.

private static bool callerOK = false;

public void AccessSecureResource()
{
  // Cache the result of the security check in shared state.
  callerOK = PerformSecurityDemand();
  OpenAndWorkWithResource();
  callerOK = false;
}
private void OpenAndWorkWithResource()
{
  if (callerOK)
  {
    // Work with the resource.
  }
}

If your code has other paths to OpenAndWorkWithResource, and a separate thread calls the method on the same object, it is possible for the second thread to omit the security demand because it encounters callerOK=true, set by another thread.
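One way to remove the vulnerability is to authorize on every call instead of caching the outcome in shared state. The following sketch uses a hypothetical SecureResource class with a stand-in authorization check in place of a real security demand.

```csharp
using System;

public class SecureResource
{
    // Stand-in for a real authorization check (for example, a permission
    // demand). In this sketch it simply validates the supplied caller name.
    private readonly string authorizedCaller;

    public SecureResource(string authorizedCaller)
    {
        this.authorizedCaller = authorizedCaller;
    }

    // Safe pattern: authorize on every call instead of caching the result
    // of a previous check in shared state that another thread could read.
    public bool AccessSecureResource(string caller)
    {
        if (!IsAuthorized(caller))
            return false;            // fail early; nothing is cached
        return OpenAndWorkWithResource();
    }

    private bool IsAuthorized(string caller)
    {
        return string.Equals(caller, authorizedCaller, StringComparison.Ordinal);
    }

    private bool OpenAndWorkWithResource()
    {
        // Access the protected resource here.
        return true;
    }
}
```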

Avoid Losing Impersonation Tokens

In .NET Framework 1.1, impersonation tokens did not automatically flow to newly created threads. This situation could lead to security vulnerabilities because new threads assume the security context of the process. In .NET Framework 2.0, by default the impersonation token still does not flow across threads, but for ASP.NET applications you can change this default behavior by configuring the ASPNET.config file in the %Windir%\Microsoft.NET\Framework\{Version} directory.

If you need to flow the impersonation token to new threads, set the enabled attribute to true on the alwaysFlowImpersonationPolicy element in the ASPNET.config file, as shown in the following example.

    <alwaysFlowImpersonationPolicy enabled="true"/>

If you need to prevent impersonation tokens from being passed to new threads programmatically, you can use the ExecutionContext.SuppressFlow method.
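A minimal sketch of suppressing flow around thread creation is shown below; the FlowSuppressionDemo class is illustrative. Threads started inside the suppressed scope do not inherit the current execution context (including any impersonation token), and AsyncFlowControl.Undo restores normal flow afterward.

```csharp
using System;
using System.Threading;

public static class FlowSuppressionDemo
{
    // Sketch: run a work item on a new thread without flowing the
    // current execution context to it.
    public static void RunWithoutContextFlow(ThreadStart work)
    {
        AsyncFlowControl flow = ExecutionContext.SuppressFlow();
        try
        {
            // A thread started here does not inherit the current
            // execution context.
            Thread t = new Thread(work);
            t.Start();
            t.Join();
        }
        finally
        {
            // Restore normal flow for subsequent operations.
            flow.Undo();
        }
    }
}
```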

Synchronize Static Class Constructors

If you use static class constructors, make sure that they are not vulnerable to race conditions. If, for example, they manipulate static state, add thread synchronization to avoid potential vulnerabilities.

Synchronize Dispose Methods

If you develop non-synchronized Dispose implementations, the Dispose code could be called more than once on separate threads. The following code shows an example of this.

void Dispose()
{
  if (null != theObject)
  {
    ReleaseResources(theObject);
    theObject = null;
  }
}

In this example, it is possible for two threads to execute the code before the first thread has set theObject reference to null. Depending on the functionality provided by the ReleaseResources method, security vulnerabilities could occur.
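A sketch of a race-free alternative uses Interlocked.Exchange so that only one thread can observe the non-null reference; the SafeDisposer class and its counter are illustrative.

```csharp
using System;
using System.Threading;

public class SafeDisposer : IDisposable
{
    private object theObject = new object();
    public int ReleaseCount;   // exposed only so the behavior is observable

    public void Dispose()
    {
        // Atomically take ownership of the reference; only one thread can
        // observe a non-null value, so ReleaseResources runs exactly once.
        object obj = Interlocked.Exchange(ref theObject, null);
        if (obj != null)
            ReleaseResources(obj);
    }

    private void ReleaseResources(object obj)
    {
        ReleaseCount++;
        // Release the resources held by obj here.
    }
}
```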


Reflection

.NET reflection is a feature that allows running code to dynamically discover, load, and generate assemblies. With reflection, you can enumerate an assembly's types, methods, and properties—including those marked as private. By using Reflection.Emit, you can generate a new assembly dynamically at run time and invoke its members. Reflection is a powerful feature that has a number of security implications:

  • If your code has the ReflectionPermission, it can enumerate and invoke non-public members or types in other assemblies. If you do not use code access security permission demands to authorize calling code, an attacker could gain access to code that would otherwise be inaccessible.
  • By using reflection, you can dynamically load assemblies—for example, by using System.Reflection.Assembly.Load. If you allow untrusted code or data to influence which assembly is loaded, an attacker could trick your code into loading and executing malicious code.
  • By using reflection, you can dynamically invoke methods on assemblies—for example, by using System.Reflection.MethodInfo.Invoke. If you allow untrusted code or data to influence the method invocation, an attacker could trick your code into making unexpected and potentially malicious method calls.
  • With Reflection.Emit, your code can dynamically generate and execute code at run time. If you allow untrusted code or data to influence the code generation, an attacker could coerce your application into generating malicious code.

When you use reflection, consider the following guidelines:

  • Use full assembly names when you dynamically load assemblies.
  • Avoid letting untrusted code or data control run-time assembly load decisions.
  • Avoid letting untrusted code or data control Reflection.Emit.
  • Consider restricting the permissions of dynamically generated assemblies.
  • Only persist dynamically created assemblies if necessary.
  • Use ReflectionOnlyLoadFrom if you only need to inspect code.

Use Full Assembly Names When You Dynamically Load Assemblies

If your code supports the dynamic loading of assemblies and you load the assembly by calling Activator.CreateInstance, make sure to refer to the assembly by using its strong name. This prevents your application from accidentally loading a malicious assembly with the same name as a legitimate assembly. The strong name of an assembly contains the public-key token that the assembly was signed with, providing evidence of the author.

The following example shows how to find the strong name for an assembly.

public static StrongName GetStrongName(Assembly assembly)
{
    if(assembly == null)
        throw new ArgumentNullException("assembly");

    AssemblyName assemblyName = assembly.GetName();

    // get the public key blob
    byte[] publicKey = assemblyName.GetPublicKey();
    if(publicKey == null || publicKey.Length == 0)
        throw new InvalidOperationException(
            String.Format("{0} is not strongly named", assembly));

    StrongNamePublicKeyBlob keyBlob = 
        new StrongNamePublicKeyBlob(publicKey);

    // create the StrongName
    return new StrongName(
        keyBlob, assemblyName.Name, assemblyName.Version);
}

Avoid Letting Untrusted Code or Data Control Run-time Assembly Load Decisions

Avoid letting the user or untrusted code directly control which assemblies or types your code loads. Avoid using user input to derive assembly or type names. These precautions prevent your application from loading a malicious assembly or blindly invoking a method that could be used for malicious purposes.

Avoid Letting Untrusted Code or Data Control Reflection.Emit

If your application dynamically generates code through the use of Reflection.Emit, do not allow untrusted code or data to influence the code generation. If an attacker can influence the code generation, the attacker could coerce your application into generating malicious code. This is particularly significant if your code uses user-supplied input to generate code dynamically.

There are some scenarios, such as script engine implementation, in which it is necessary to allow untrusted input to drive Reflection.Emit. If your assembly dynamically generates code to perform operations for a caller and the caller operates at a lower trust level, be especially vigilant for security vulnerabilities. Validate any input string used as a string literal in your generated code and escape quotation mark characters to make sure that the caller cannot break out of the literal and inject code. If there is a way that the caller can influence the code generation so that it fails to compile, treat the problem as a potential security vulnerability.
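As a sketch of the quotation-mark rule, a helper along the following lines (the name EscapeForStringLiteral is illustrative) can be applied to every input value before it is embedded in a generated C# string literal.

```csharp
using System;

public static class CodeGenEscaping
{
    // Escape characters that would let input break out of a generated
    // C# string literal. Order matters: backslashes must be doubled first.
    public static string EscapeForStringLiteral(string input)
    {
        if (input == null)
            throw new ArgumentNullException("input");
        return input.Replace("\\", "\\\\")
                    .Replace("\"", "\\\"")
                    .Replace("\r", "\\r")
                    .Replace("\n", "\\n");
    }
}
```

If a value still causes the generated code to fail to compile after escaping, reject it rather than attempting to repair it.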

Consider Restricting the Permissions of Dynamically Generated Assemblies

If you must use user input to help dynamically generate assemblies, you can restrict the permissions available to dynamically created assemblies by using the overload of the AppDomain.DefineDynamicAssembly method, which accepts three permission sets and evidence, as shown in the following example.

public AssemblyBuilder DefineDynamicAssembly(
            AssemblyName name,
            AssemblyBuilderAccess access,
            Evidence evidence,
            PermissionSet requiredPermissions,
            PermissionSet optionalPermissions,
            PermissionSet refusedPermissions
);

This overload allows you to pass required, optional, and refused permission sets to apply a specific security policy to the dynamically created assembly. Passing the evidence forces the CLR to evaluate the permission set for the dynamically created code.

Note   If you use an overload that does not accept evidence, then the CLR does not evaluate the permission set for the created dynamic assembly. Instead, the permission set granted to the generated code is inherited from the permission set of the assembly that is emitting it.

Only Persist Dynamically Created Assemblies If Necessary

Where possible, avoid persisting dynamically generated assemblies created with Reflection.Emit. To avoid the risk of the code being called by code external to your application, keep the generated assembly memory resident only and do not persist it.

Use ReflectionOnlyLoadFrom If You Only Need to Inspect Code

If you need to load untrusted assemblies to inspect members but not to run code, use the Assembly.ReflectionOnlyLoadFrom method to load the assembly. Reflection-only loading—also known as introspection—allows you to load an assembly to inspect the code's members. This method reduces your attack surface because it does not allow any code from the loaded assembly to run.


Obfuscation

If you are concerned about protecting intellectual property, you can use an obfuscation tool to make it extremely difficult for a disassembler to be used on the MSIL code of your assemblies. An obfuscation tool confuses human interpretation of the MSIL instructions and helps prevent successful disassembly.

Obfuscation is not foolproof, and you should not build security solutions that rely on it. However, obfuscation does address threats that occur because of the possibility that an attacker can reverse engineer code. Obfuscation tools generally provide the following benefits:

  • They help protect your intellectual property.
  • They obscure code paths. This makes it harder for an attacker to crack security logic.
  • They mangle the names of internal member variables. This makes it harder to understand the code.
  • They encrypt strings. Attackers often attempt to search for specific strings to locate important sensitive logic. String encryption makes this much harder to do.

A number of third-party obfuscation tools exist for the .NET Framework. One tool, the Community Edition of the Dotfuscator tool by PreEmptive Solutions, is included with the Microsoft Visual Studio® 2005 development system and is also available directly from PreEmptive Solutions.

When your goal is to protect intellectual property, remember the following:

  • Avoid storing secrets in code.
  • Consider using obfuscation to make intellectual property theft more difficult.

Avoid Storing Secrets in Code

Do not hard code sensitive information, such as connection strings, user credentials, and encryption keys. An attacker who has access to your assembly can access the sensitive information by examining the MSIL, by using a disassembler, or by using reflection.

If you must store secrets in code, use a strong obfuscator to make sure that the class or member of the class that stores the hard coded secret is obfuscated.

Consider Using Obfuscation to Make Intellectual Property Theft More Difficult

Assemblies can be reverse engineered easily. This enables people to understand your program logic and how it has been implemented. If you are concerned about protecting your intellectual property, use obfuscation to make it much more difficult for anyone to reverse engineer your assembly and understand the program logic.

Use obfuscation tools, such as Dotfuscator Community Edition available with Visual Studio 2005. Do not rely on obfuscation for security, but use it to make it more difficult for anyone to access secrets stored in code or to reverse engineer your code.


Cryptography

Cryptography is one of the most important tools that you can use to protect data. You can use encryption to provide data privacy. You can use hash algorithms, which produce a fixed and condensed representation of data, to make data tamperproof. Also, you can use digital signatures for authentication purposes.

You should use encryption when you want data to be secure in transit or in storage. Some encryption algorithms perform better than others, and some provide stronger encryption. Typically, larger encryption key sizes increase security.

Two of the most common mistakes made when using cryptography are developing your own encryption algorithms and failing to secure your encryption keys. Encryption keys must be handled with care. An attacker armed with your encryption key can gain access to your encrypted data. When you use encryption:

  • Use platform-provided cryptographic services.
  • Use appropriately sized keys.
  • Use GenerateKey to generate random keys.
  • Consider using DPAPI to avoid key management.
  • Do not store keys in code.
  • Use PasswordDeriveBytes for password-based encryption.
  • Restrict access to persisted keys.
  • Cycle keys periodically.
  • Protect exported private keys.

Use Platform-Provided Cryptographic Services

Cryptography is notoriously difficult to develop. The Windows crypto APIs are proven to be effective. These APIs are implementations of algorithms derived from years of academic research and study. Some developers believe that a less well-known algorithm can provide more security, but this is not true. Cryptographic algorithms are mathematically proven; therefore, the more scrutiny they receive, the better. An obscure algorithm will not protect a flawed cryptographic implementation from a determined attacker.

  • For hashing, use SHA1. For integrity checking, use HMACSHA1 or a digital signature mechanism.
  • Consider using the XML Encryption mechanisms when you need to encrypt different parts of a document under different keys or if you only want to encrypt small sections of a document.
  • Use X.509 and S/MIME encryption if you are using an internal or external public key infrastructure (PKI) based on digital certificates.

Use Appropriately Sized Keys

Choosing a key size represents a trade-off between performance and security. If you choose a key that is too small, the data that you thought was well protected can be vulnerable to attack. If you choose a key that is too large, your system will be subject to a performance impact without a commensurate real-world improvement in security. The appropriate key size changes based on the cryptographic algorithm in use, and also changes over time as machine processing speeds increase and attack techniques become more sophisticated. The following recommendations will give you a margin of safety without sacrificing too much performance:

  • When you use an asymmetric algorithm (for example, RSA), choose a 2048-bit key.
  • When you use a symmetric algorithm (for example, AES), choose a 128-bit key.
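With the Framework's algorithm classes, these sizes can be set explicitly, as the following sketch shows (RSACryptoServiceProvider takes the key size in its constructor; RijndaelManaged exposes a KeySize property).

```csharp
using System.Security.Cryptography;

public static class KeySizeExamples
{
    public static RSACryptoServiceProvider CreateRsa()
    {
        // Generate a 2048-bit RSA key pair.
        return new RSACryptoServiceProvider(2048);
    }

    public static RijndaelManaged CreateAes()
    {
        RijndaelManaged aes = new RijndaelManaged();
        aes.KeySize = 128;   // 128-bit AES key.
        return aes;
    }
}
```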

Use GenerateKey to Generate Random Keys

When you create a new instance of a managed symmetric cryptographic class by using its default constructor, a new key and initialization vector (IV) are automatically created. If you need to generate a fresh key for an existing instance, call the GenerateKey method on the symmetric algorithm instance. GenerateKey creates a strong random key and sets it on the algorithm.

When you use symmetric algorithms, creating and managing keys is an important part of cryptographic process. If you use weak keys, you increase the likelihood that an attacker can compromise the key and access your encrypted data. The following example shows how to use the GenerateKey method.

using System.Security.Cryptography;

// Create an instance of the AES (Rijndael) encryption algorithm.
RijndaelManaged aesEncryptionAlgorithm = new RijndaelManaged();

// Generate a random IV and key for the instance.
aesEncryptionAlgorithm.GenerateIV();
aesEncryptionAlgorithm.GenerateKey();

// Retrieve the IV and key, and encrypt and safeguard them.
byte[] iv = aesEncryptionAlgorithm.IV;
byte[] key = aesEncryptionAlgorithm.Key;

Consider Using DPAPI to Avoid Key Management

By using DPAPI to encrypt sensitive data—either in memory or in persistent stores, such as configuration files or the registry—you avoid having to manage and protect the encryption key. With DPAPI, the operating system manages and protects the key.

For sensitive data stored in ASP.NET Web.config files, you can use the Aspnet_regiis tool and the data protection feature provided with ASP.NET 2.0. For more information, see "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI."

Note   DPAPI encryption is not recommended for use in Web farm scenarios because of machine affinity. Instead, you should use RSA encryption, which is also supported by the Aspnet_regiis tool.

For sensitive data stored in memory, you can use the ProtectedMemory class which provides a managed wrapper for DPAPI to encrypt the data. For text-based data, consider using SecureString instead. SecureString uses the ProtectedMemory class to encrypt text in memory.

The following example shows how to use the ProtectedMemory class for encrypting and decrypting data in memory.

using System.Security.Cryptography;
using System.Text;

byte[] dataToBeEncrypted = Encoding.Unicode.GetBytes("Test String 1211");
ProtectedMemory.Protect(dataToBeEncrypted, MemoryProtectionScope.SameLogon);

ProtectedMemory.Unprotect(dataToBeEncrypted, MemoryProtectionScope.SameLogon);
string originalData = Encoding.Unicode.GetString(dataToBeEncrypted);

For sensitive data stored in any other data store, use the ProtectedData class as shown in the following example.

using System.Security.Cryptography;
using System.Text;

byte[] optionalEntropy = {7,5,4,9,0};
byte[] dataToBeEncrypted = Encoding.Unicode.GetBytes("Test String");
byte[] encryptedData = ProtectedData.Protect(dataToBeEncrypted, optionalEntropy, DataProtectionScope.CurrentUser);

byte[] decryptedData = ProtectedData.Unprotect(encryptedData, optionalEntropy, DataProtectionScope.CurrentUser);
string originalData = Encoding.Unicode.GetString(decryptedData);

When you use DPAPI, you can use the machine key store or the user key store. Use machine-level key storage in the following situations:

  • Your application runs on its own dedicated server with no other applications.
  • You have multiple applications on the same server, and you want those applications to be able to share sensitive information.

Use user-level key storage if you run your application in a shared hosting environment and you want to make sure that your application's sensitive data is not accessible to other applications on the server. In this situation, each application should run under a separate identity, and the resources for the application—such as files and databases—should be restricted to that identity.

Use PasswordDeriveBytes for Password-Based Encryption

The System.Security.Cryptography namespace provides the PasswordDeriveBytes class (derived from DeriveBytes) for use when you encrypt data based on a password the user supplies. To decrypt the data, the user must supply the same password he or she used to encrypt it.

Note that you should not use this approach for password authentication. Store a password verifier in the form of a hash value with a salt value to authenticate a user's password. Use PasswordDeriveBytes to generate keys for password-based encryption.
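The verifier approach can be sketched as follows. The class and method names are illustrative, and SHA256 is used here simply as an example hash algorithm; the comparison loop is constant time so the check does not leak how many leading bytes matched.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordVerifier
{
    // Compute a salted hash of the password to store as a verifier.
    public static byte[] CreateVerifier(string password, byte[] salt)
    {
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
        byte[] salted = new byte[salt.Length + passwordBytes.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(passwordBytes, 0, salted, salt.Length, passwordBytes.Length);
        using (SHA256 sha = SHA256.Create())
        {
            return sha.ComputeHash(salted);
        }
    }

    // Recompute the verifier at logon and compare in constant time.
    public static bool Verify(string password, byte[] salt, byte[] storedVerifier)
    {
        byte[] candidate = CreateVerifier(password, salt);
        if (candidate.Length != storedVerifier.Length)
            return false;
        int diff = 0;
        for (int i = 0; i < candidate.Length; i++)
            diff |= candidate[i] ^ storedVerifier[i];
        return diff == 0;
    }
}
```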

PasswordDeriveBytes accepts a password, salt, an encryption algorithm, a hashing algorithm, key size (in bits), and initialization vector data to create a symmetric key to be used for encryption. After the key is used to encrypt the data, clear it from memory but persist the salt and initialization vector. These values should be protected and are needed to regenerate the key for decryption.

The following code shows how to call PasswordDeriveBytes.

using System.Security.Cryptography;
// Get salt (random bytes) using RNGCryptoServiceProvider.
byte[] salt = new byte[8];
new RNGCryptoServiceProvider().GetBytes(salt);

// Create the PasswordDeriveBytes object using the password and salt.
PasswordDeriveBytes passwordBytes = new PasswordDeriveBytes("P@ssword!", salt);

// Create a TripleDESCryptoServiceProvider object.
TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();

// Create the key and add it to the Key property.
tdes.Key = passwordBytes.CryptDeriveKey("TripleDES", "SHA1", 192, tdes.IV);

Use the TripleDESCryptoServiceProvider object to encrypt and decrypt data.

Do Not Store Keys in Code

Do not store keys in code because an attacker can disassemble hard-coded keys in your compiled assembly by using tools similar to ILDASM, which will render your key in plain text. Instead use DPAPI to encrypt the encryption key and store it in a protected registry key. Create an ACL to protect the registry key that allows full control for administrators and read-only access for your application's process account.

If you need to encrypt data and decrypt data by using symmetric encryption, protect the symmetric encryption key. Use DPAPI to encrypt it, and then store the resulting cipher text in a protected registry key.

Restrict Access to Persisted Keys

When you store keys to be used at run time in persistent storage, use appropriate ACLs and limit access to the keys. Grant access to the keys only to administrators, SYSTEM, and the identity of the code at run time (for example, the Network Service account in the case of ASP.NET applications that run in the default application pool).

When you back up a key, do not store it in plain text. Instead, use DPAPI to encrypt it or use a strong password and place it on removable media.

Cycle Keys Periodically

You should change your encryption keys from time to time, because a static secret is more likely to be discovered over time. Did you write it down somewhere? Did the administrator with access to the secrets change positions in your company or leave the company? Have you been using the same session key to encrypt communication for a long time? Also, do not overuse keys; the more data a single key protects, the more valuable that key becomes to an attacker.

Protect Exported Private Keys

If your private key used for asymmetric encryption and key exchange is compromised, do not continue to use it. Immediately notify the users of the public key that the key has been compromised. If you used the key to sign documents, use a new key to sign them again.

If the private key of your certificate is compromised, contact the issuing certification authority to have your certificate placed on a certificate revocation list. Also, change the way your keys are stored to avoid a future compromise.

Sensitive Data

Sensitive data usually needs to be protected in persistent storage, in memory, and while it is on the network. Where possible, look for opportunities to avoid storing sensitive data. For example, store password hashes rather than the passwords themselves. To make sure that sensitive data cannot be viewed, use encryption and carefully examine the way in which you protect the encryption key.

To help protect sensitive data:

  • Use protected configuration to protect sensitive data in configuration files.
  • Minimize the exposure of secrets in memory.
  • Where possible, use SecureString rather than System.String.

Use Protected Configuration to Protect Sensitive Data in Configuration Files

Use .NET 2.0 protected configuration to protect sensitive data in configuration files. For ASP.NET Web.config files, you can use the Aspnet_regiis tool to encrypt specific sections. The sections of a Web.config file that usually contain sensitive information that you need to encrypt are the following:

  • <appSettings>. This section contains custom application settings.
  • <connectionStrings>. This section contains connection strings.
  • <identity>. This section can contain impersonation credentials.
  • <sessionState>. This section contains the connection string for the out-of-process session state provider.

Protected configuration supports DPAPI and RSA encryption. To use the DPAPI provider with the machine key store (the default configuration) to encrypt the connectionStrings section, run the following command from a command prompt:

aspnet_regiis -pe "connectionStrings" -app "/MachineDPAPI" -prov "DataProtectionConfigurationProvider"


In this command:

  • -pe specifies the configuration section to encrypt.
  • -app specifies your Web application's virtual path. If your application is nested, you need to specify the nested path from the root directory, for example, "/test/aspnet/MachineDPAPI".
  • -prov specifies the provider name.

The .NET Framework 2.0 SDK supports RSAProtectedConfigurationProvider and DPAPIProtectedConfigurationProvider protected configuration providers, which you use with the Aspnet_regiis tool.

  • RSAProtectedConfigurationProvider. This is the default provider and uses RSA public key encryption to encrypt and decrypt data. Use this provider to encrypt configuration files for use on multiple Web servers in a Web farm.
  • DPAPIProtectedConfigurationProvider. This provider uses DPAPI to encrypt and decrypt data. Use this provider to encrypt configuration files for use on a single Windows Server.

Minimize the Exposure of Secrets in Memory

When manipulating secrets, consider how the secret data is stored in memory. How long is the secret data retained in clear text format? Clear text secrets held in your process address space are vulnerable if an attacker is able to probe your application's address space. Also, if the page of memory containing the secret is swapped out to the page file, the secret data is vulnerable if someone gains access to the page file. Similarly, clear text secrets held in memory appear in the crash dump file if a process crashes. To minimize the exposure of secrets in memory, consider the following measures:

  • Avoid creating multiple copies of the secret. Having multiple copies of the secret data increases your attack surface. Pass references to secret data instead of making copies of the data. Also realize that if you store secrets in immutable System.String objects, after each string manipulation, a new copy is created.
  • Keep the secret encrypted for as long as possible. Decrypt the data at the last possible moment before you need to use the secret.
  • Clean the clear text version of the secret as soon as you can. Replace the clear text copy of the secret data with zeros as soon as you have finished with it.

Prior to .NET Framework 2.0, the use of byte arrays was recommended to help implement these guidelines. Byte arrays can be pinned in memory, encrypted, and replaced with zeros. In .NET Framework 2.0, use SecureString instead.
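For code that must still use the byte-array pattern, the sequence looks like the following sketch (the helper name is illustrative): pin the array so the garbage collector cannot relocate and thereby copy it, use the secret, then overwrite it with zeros before unpinning.

```csharp
using System;
using System.Runtime.InteropServices;

public static class SecretBuffer
{
    public static void UseSecret(byte[] secret)
    {
        // Pin the array so the GC cannot relocate (and thereby copy) it.
        GCHandle handle = GCHandle.Alloc(secret, GCHandleType.Pinned);
        try
        {
            // ... use the clear text secret here ...
        }
        finally
        {
            // Overwrite the clear text with zeros, then unpin.
            Array.Clear(secret, 0, secret.Length);
            handle.Free();
        }
    }
}
```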

Where Possible, Use SecureString Rather than System.String

Consider using the System.Security.SecureString type to help protect secrets in memory. SecureString objects use DPAPI encryption to store data in an encrypted format in memory. They are only decrypted when they are accessed. Although you have to decrypt the data to use it, by using SecureString instead of System.String you gain a number of benefits:

  • You help to minimize the number of copies of the secret held in memory, which reduces the attack surface.
  • You reduce the amount of time that the secret is visible to an attacker who has access either to your process memory address space or to the page file.
  • You increase the likelihood that an encrypted version of the secret rather than a clear text version will end up in a dump file if your process crashes.
Note   Unfortunately, in many scenarios you are forced to convert the SecureString to a System.String before you can use it. For example, few .NET Framework API methods currently provide overloads that support SecureString. Use of SecureString is less appropriate in ASP.NET applications. For example, it is unlikely that you can take a credit card number from a Web page without the number at some point passing through a System.String, because most of the form-related APIs do not have overloads that permit the use of SecureString instead of System.String.

Creating a SecureString

You can create a SecureString by supplying a pointer to a character array and supplying the length of that array. When constructed this way, the SecureString type takes a copy of your array. You should replace your source array with zeros as soon as the SecureString is constructed. A SecureString can also be constructed without an existing character array, and data can be copied one character at a time. The following code sample shows how to use the AppendChar method to create a secure string one character at a time.

using System.Security;

SecureString securePassword = new SecureString();
Console.WriteLine("Enter Password....");
while (true)
{
    ConsoleKeyInfo conKeyInfo = Console.ReadKey(true);
    if (conKeyInfo.Key == ConsoleKey.Enter)
        break;
    else if (conKeyInfo.Key == ConsoleKey.Escape)
    {
        securePassword.Clear();
        break;
    }
    else if (conKeyInfo.Key == ConsoleKey.Backspace)
    {
        if (securePassword.Length != 0)
            securePassword.RemoveAt(securePassword.Length - 1);
    }
    else
        securePassword.AppendChar(conKeyInfo.KeyChar);
}

Retrieving Data from a SecureString

You retrieve data from a SecureString by using the marshaller. The Marshal class has been extended to provide methods that convert a SecureString into a BSTR data type or a raw block of ANSI or Unicode memory. When you have finished using the unprotected string, you should erase that copy by calling Marshal.ZeroFreeBSTR, as shown in the following example.

using System.Security;
using System.Runtime.InteropServices;

void UseSecretData(SecureString secret)
{
    IntPtr bstr = Marshal.SecureStringToBSTR(secret);
    // Use the bstr here

    // Make sure that the clear text data is zeroed out
    Marshal.ZeroFreeBSTR(bstr);
}

Why Not Use System.String?

Using System.String for storing sensitive information is not recommended for the following reasons:

  • It is not pinned, which means that the garbage collector can move it around and leave the data in memory for indeterminate amounts of time.
  • It is not encrypted; therefore, the data can be read from process memory or from the swap file.
  • It is immutable; therefore, there is no effective way of clearing the data after use. Modification leaves both the old copy and a new copy in memory.

Unmanaged Code

Give special attention to code that calls unmanaged code, including Win32 DLLs and COM objects, because of the increased security risk. Unmanaged code is not verifiably type safe and introduces the potential for buffer overflows. Resource access from unmanaged code is not subject to code access security checks; enforcing safe access is the responsibility of the managed wrapper class.

Consider the following guidelines when calling unmanaged code.

  • Use naming conventions (safe, native, unsafe) to identify unmanaged APIs.
  • Isolate unmanaged API calls in a wrapper assembly.
  • Constrain and validate string parameters.
  • Validate array bounds.
  • Check file path lengths.
  • Use the /GS switch to compile unmanaged code.
  • Inspect unmanaged code for dangerous APIs.
  • Avoid exposing unmanaged types or handles to partially trusted code.
  • Use SuppressUnmanagedCodeSecurity with caution.

Use Naming Conventions (Safe, Native, Unsafe) to Identify Unmanaged APIs

Unmanaged code represents a significant risk if used improperly from managed code. Apply naming conventions so that you can be reminded of native code risks when you develop, review, or revise code. Categorize your unmanaged code by using a prefix to encapsulate the types of unmanaged APIs. Use the following naming convention.

  • Safe. This identifies code that poses no possible security threat. It is harmless for any code, malicious or otherwise, to call. An example is code that returns the current processor tick count. Safe classes can be annotated with SuppressUnmanagedCodeSecurityAttribute, which turns off the code access security permission demand for full trust.
    [SuppressUnmanagedCodeSecurity]
    class SafeNativeMethods {
           [DllImport("user32.dll")]
           internal static extern void MessageBox(string text);
    }
  • Native. This is potentially dangerous unmanaged code that is protected with a full stack-walking demand for the unmanaged code permission. These demands are made implicitly by the interop layer, unless they have been suppressed with SuppressUnmanagedCodeSecurityAttribute.
    class NativeMethods {
           [DllImport("kernel32.dll")]
           internal static extern void FormatDrive(string driveLetter);
    }
  • Unsafe. This is potentially dangerous unmanaged code that has the security demand for the unmanaged code permission declaratively suppressed. Any caller of these methods must perform a full security review to make sure that the usage is safe and protected, because no stack walk is performed.
    [SuppressUnmanagedCodeSecurity]
    class UnsafeNativeMethods {
           [DllImport("kernel32.dll")]
           internal static extern void CreateFile(string fileName);
    }

Isolate Unmanaged API Calls in a Wrapper Assembly

To make it easier to maintain and review your code, place all unmanaged API calls in a wrapper assembly. This allows you to:

  • Easily determine the set of unmanaged APIs on which your application depends.
  • Isolate your unmanaged calls in a single assembly.
  • Isolate the unmanaged code permission to a single assembly.

Constrain and Validate String Parameters

Certain types of input and data validation vulnerabilities are much more likely to occur in native code than in managed code. To reduce the risk of buffer overrun, format string, and integer overflow bugs, constrain and validate string parameters passed to native code.

Check the length of any input string inside your wrapper code to make sure that it does not exceed the limit defined by the unmanaged API. If the unmanaged API accepts a character pointer, you may not know the maximum permitted string length unless you have access to the unmanaged source.

If you cannot examine the unmanaged code because you do not own it, make sure that you rigorously test the API by passing in deliberately long input strings.

If your code uses a StringBuilder to receive a string passed from an unmanaged API, make sure that it can hold the longest string that the unmanaged API can hand back.
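A wrapper can apply these length checks before the call crosses into native code. In the following sketch, the limit and helper names are hypothetical; substitute the real limit defined by the unmanaged API you wrap.

```csharp
using System;

public static class NativeStringChecks
{
    // Hypothetical limit imposed by the unmanaged API being wrapped.
    private const int MaxNativeStringLength = 255;

    // Reject strings the native buffer cannot safely hold.
    public static void ValidateNativeString(string value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        if (value.Length > MaxNativeStringLength)
            throw new ArgumentException(
                "Input exceeds the limit of the unmanaged API.");
    }
}
```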

Validate Array Bounds

Array bounds are not automatically checked in native code. If you use an array to pass input to an unmanaged API, check that the managed wrapper verifies that the capacity of the array is not exceeded.

Check File Path Lengths

If the unmanaged API accepts a file name and path, check that it does not exceed 260 characters. This limit is defined by the Win32 MAX_PATH constant. It is very common for unmanaged code to allocate buffers of this length to manipulate file paths. In many cases, the native code does not check or constrain input to this maximum length.

Note   Directory names and registry keys can be a maximum of 248 characters long.
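A minimal pre-check in the managed wrapper might look like the following sketch (the helper name is illustrative):

```csharp
using System;

public static class PathLengthChecks
{
    private const int MaxPath = 260;   // Win32 MAX_PATH

    // Returns false for paths that could overflow a MAX_PATH buffer.
    public static bool IsWithinMaxPath(string path)
    {
        return path != null && path.Length < MaxPath;
    }
}
```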

Use the /GS Switch to Compile Unmanaged Code

If you own the unmanaged code, use the /GS switch to compile it, which enables stack probes to help detect buffer overflows. For more information about the /GS switch, see Microsoft Knowledge Base article 325483, "WebCast: Compiler Security Checks: The -GS compiler switch."

Inspect Unmanaged Code for Dangerous APIs

If you have access to the source code for the unmanaged code that you are calling, you should subject it to a thorough code review, paying particular attention to parameter handling to make sure that buffer overflows are not possible and that it does not use potentially dangerous APIs. Dangerous APIs include:

  • Threading functions that switch security context.
  • Access token functions, which can make changes to or disclose information about a security token.
  • Credential management functions, including functions that create tokens.
  • Crypto API functions that can decrypt and access private keys.
  • Memory management functions that can read and write memory.
  • LSA functions that can access system secrets.

Avoid Exposing Unmanaged Types or Handles to Partially Trusted Code

Partially trusted code is not allowed access to unmanaged types or handles. If you allow partially trusted callers access to elements received from an unmanaged API, you could open yourself to a luring attack.

Check all types and handles received from unmanaged code, and trace their use through your assemblies.

Use SuppressUnmanagedCodeSecurity with Caution

If your assembly makes many calls to unmanaged code, the performance overhead associated with multiple unmanaged code permission demands can become an issue.

In this situation, you can use the SuppressUnmanagedCodeSecurity attribute on the P/Invoke method declaration. This causes the full demand for the unmanaged code permission to be replaced with a link demand, which occurs only once at just-in-time (JIT) compilation time.

Link demands make your code vulnerable to luring attacks. To mitigate the risk, you should suppress the unmanaged code permission demand only if your assembly takes adequate precautions to ensure that it cannot be coerced by malicious code into performing unwanted operations. An example of a suitable countermeasure is if your assembly demands a custom permission that more closely reflects the operation being performed by the unmanaged code.

Using SuppressUnmanagedCodeSecurity with P/Invoke

The following code shows how to apply the SuppressUnmanagedCodeSecurity attribute to a P/Invoke method declaration.

class NativeMethods
{
    // The use of SuppressUnmanagedCodeSecurity here applies only to FormatMessage
    [DllImport("kernel32.dll"), SuppressUnmanagedCodeSecurity]
    private unsafe static extern int FormatMessage(
                                        int dwFlags,
                                        ref IntPtr lpSource,
                                        int dwMessageId,
                                        int dwLanguageId,
                                        ref String lpBuffer, int nSize,
                                        IntPtr *Arguments);
}

Using SuppressUnmanagedCodeSecurity with COM Interop

For COM interop calls, the attribute must be used at the interface level, as shown in the following example.

[SuppressUnmanagedCodeSecurity]
public interface IComInterface
{
}

Hold Pointers in Private Fields

If you hold pointers to unmanaged memory, for example in IntPtr fields, make sure that you mark the fields as private. This prevents someone from attempting to manipulate the pointer value, perhaps to cause an access violation or perhaps to change or remove the pointer reference and use the pointer to get access to sensitive information.

Companion Guidance

Additional Resources


Provide feedback by using either a Wiki or e-mail.

We are particularly interested in feedback regarding the following:

  • Technical issues specific to recommendations
  • Usefulness and usability issues

Technical Support

Technical support for the Microsoft products and technologies referenced in this guidance is provided by Microsoft Support Services. For product support information, please visit the Microsoft Product Support Web site.

Community and Newsgroups

Community support is provided in the forums and newsgroups.

To get the most benefit, find the newsgroup that corresponds to your technology or problem. For example, if you have a problem with ASP.NET security features, you would use the ASP.NET Security forum.

Contributors and Reviewers

  • External Contributors and Reviewers: Anil John, Johns Hopkins University–Applied Physics Laboratory; Frank Heidt; Jason Taylor, Security Innovation
  • Microsoft Product Group: Charlie Kaufman, Don Willits, Mike Downen, Pablo Castro, Stefan Schackow
  • Microsoft IT Contributors and Reviewers: Akshay Aggarwal, Shawn Veney, Talhah Mir
  • Microsoft Services and PSS Contributors and Reviewers: Adam Semel, Tom Christian, Wade Mascia
  • Microsoft patterns & practices Contributors and Reviewers: Carlos Farre
  • Test team: Larry Brader, Microsoft Corporation; Nadupalli Venkata Surya Sateesh, Sivanthapatham Shanmugasundaram, Infosys Technologies Ltd.
  • Edit team: Nelly Delgado, Microsoft Corporation
  • Release Management: Sanjeev Garg, Microsoft Corporation
