
Secure Coding Guidelines for the .NET Framework


Microsoft Corporation

January 2002

Applies to:
   Microsoft .NET Framework

Summary: The common language runtime and the Microsoft .NET Framework enforce evidence-based security on all managed code applications. Most code requires little to no explicit coding for security. This paper briefly describes the security system, discusses the security issues you might need to consider in your code, and provides guidelines for classifying your components so you know what issues you might need to address to ensure that the code is secure. (28 printed pages)

Prerequisites: Readers should be familiar with the common language runtime and the Microsoft® .NET Framework, including basic knowledge of evidence-based security and code access security.


Evidence-Based Security and Code Access Security
Goals of Secure Coding
Approaches to Secure Coding
Best Practices for Secure Coding
Securing State Data
Securing Method Access
Wrapper Code
Unmanaged Code
User Input
Remoting Considerations
Protected Objects
Application Domain Crossing Issues
Assessing Permissions
Other Security Technologies

Evidence-Based Security and Code Access Security

Two separate technologies work together to protect managed code:

  • Evidence-based security determines what permissions to grant to code.
  • Code access security checks that all code on the stack has the necessary permissions to do something.

Permissions bind these technologies together: a permission is the right to perform a specific protected operation. For example, "to read c:\temp" is a file permission; "to connect to www.msn.com" is a network permission.

Evidence-based security determines the permissions granted to code. Evidence is the information known about any assembly (the unit of granting permission) that is used as input to the security policy mechanism. Given evidence as input, security policy set by the administrator is evaluated to determine what permissions can be given to the code. The code itself can use a permission request to influence the permissions that are granted. The permission request is expressed as assembly-level declarative security using custom attribute syntax. However, the code can never in any way cause more or fewer permissions to be granted to it than the policy system allows. Permission grants occur once and specify the rights of all code in the assembly. To view or edit your security policy, use the .NET Framework configuration tool (Mscorcfg.msc).

The following table shows some common types of evidence that the policy system uses to grant permissions to code. In addition to the standard types of evidence listed here, which are provided by the security system, it is also possible to extend the set of evidence with new types that customers may define.

Evidence     Description
Hash         Hash of the assembly
Publisher    Authenticode® signer
StrongName   Public key + name + version
Site         Web site of code origin
Url          URL of code origin
Zone         Internet Explorer zone of code origin
Code access security handles the security checks that enforce granted permissions. The unique aspect of these security checks is that they check not only the code that is attempting to do a protected operation, but also all of its callers up the stack. All checked code must have the necessary permission (subject to overrides) for the check to succeed.

Security checks are beneficial because they prevent luring attacks, where unauthorized code calls your code and tricks it into doing something on behalf of the unauthorized code. Suppose you have an application that reads a file, and security policy grants your code permission to do this. Because all your application code has permission, code access security checks will pass. However, if malicious code, which does not have access to the file, calls any of your code in any way, the security check will fail because that less trusted code will be visible on the stack by virtue of calling your code.
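The stack walk that defeats a luring attack can also be triggered explicitly. The following sketch (the class name, method, and file path are illustrative, not from this document) demands a FileIOPermission before touching the file, so every caller up the stack is checked first; the Framework's own file classes make an equivalent demand, the explicit one here simply shows the mechanism:

```csharp
using System.IO;
using System.Security.Permissions;

public class ReportReader
{
    public string ReadReport(string path)
    {
        // Walk the entire call stack: every caller must have been
        // granted read access to this path, or a SecurityException
        // is thrown before the file is ever opened.
        new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();

        using (StreamReader r = new StreamReader(path))
        {
            return r.ReadToEnd();
        }
    }
}
```

If malicious code calls ReadReport, the demand fails because that code is visible on the stack, even though ReportReader itself has the permission.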

It is important to note that all of this security is based on the enforcement of what code is allowed to do. Authorization of users based on logon information is a completely separate security feature of the underlying operating system. Think of these two security systems as a multi-layered defense: to access a file, for example, both code and user authorizations must be passed. Although user authorization is important in many applications that depend on user logon information or other credentials to control what certain users can and cannot do, this type of security is not a focus of this document.

Goals of Secure Coding

It is assumed that security policy is correct and that potentially malicious code is not granted the powerful permissions that trusted code receives. (To assume otherwise would make one kind of code indistinguishable from the other, and the problem impossible.) Using the permissions enforced by the .NET Framework, together with other enforcement in your code, you must erect barriers that prevent malicious code from obtaining information you do not want it to have or from performing other undesirable actions. Additionally, a balance must be struck between security and usability of the code in all the scenarios intended for trusted code.

Evidence-based security policy and code access security provide very powerful, explicit mechanisms to implement security. Most application code simply needs to use the infrastructure implemented by the .NET Framework. In some cases, additional application-specific security is required, built either by extending the security system or by using new ad hoc methods.

Approaches to Secure Coding

One advantage of these security technologies is that you can usually forget about them. If your code is granted the permissions it needs to do its job, things will just work (while you enjoy protection against potential onslaughts, such as the luring attack described previously). However, there are a few specific situations where you must explicitly address security. The sections that follow describe these approaches. Even if these sections do not apply directly to you, understanding these security issues might prove useful.

Security-Neutral Code

Security-neutral code does nothing explicit with the security system. It runs with whatever permissions it receives. Although failing to catch the security exceptions thrown by protected operations (such as using files or networking) can result in an ugly user experience (an exception whose details are obscure to most users), this approach still takes full advantage of the security technologies, because even highly trusted code will not open holes in security protection. The worst that can happen is that callers will need many permissions or will be stopped by security.

A security-neutral library has special characteristics that you should understand. Suppose your library provides API elements that use files or call unmanaged code; if your code does not have the corresponding permission, it will not run as described. However, even if the code has the permission, any application code that calls it must have the same permission in order to work. If the calling code does not have the right permission, the security exception will appear as a result of the code access security stack walk. If it is acceptable to require your callers to have permissions for everything your library does, this is an easy and safe way to implement security because it does not involve a risky security override. However, if you want application code that calls your library to be shielded from the effects of permission demands and relieved from the need to have what could be very powerful permissions, you must look at the library model that works with protected resources, which is described in the Library Code that Exposes Protected Resources section of this document.

Application Code That Is Not a Reusable Component

If your code is part of an application that will not be called by other code, security is simple and special coding might not be required. However, remember that malicious code can call your code. While code access security might stop malicious code from accessing resources, such code could still read values of your fields or properties that might contain sensitive information.

Additionally, if your code accepts user input from the Internet or other unreliable sources, you must be careful of malicious input.

For more information, see Securing State Data and User Input in this document.

Managed Wrapper to Native Code Implementation

Typically in this scenario, some useful functionality is implemented in native code and you want to make it available to managed code without rewriting it as such. Managed wrappers are easy to write as either platform invokes or using COM interop. However, if you do this, callers of your wrappers must have unmanaged code rights to succeed. Under default policy, this means that intranet- and Internet-downloaded code will not work with the wrappers.

Rather than giving all applications that use these wrappers unmanaged code rights, it is better to give these rights only to the wrapper code. If the underlying functionality is safe (exposes no resources) and the implementation is safe, the wrapper only needs to assert its rights, which enables any code to call through it. When resources are involved, security coding should be the same as the library code case described in the next section. Because the wrapper is potentially exposing callers to these issues, careful verification of the safety of the native code is necessary and is the wrapper's responsibility.
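As a sketch of this pattern (the Win32 Beep function is a real kernel32 API; the wrapper class is hypothetical), the wrapper asserts unmanaged code rights so its callers do not need them. This is safe only because Beep exposes no protected resource and its arguments cannot be abused:

```csharp
using System.Runtime.InteropServices;
using System.Security.Permissions;

public class Beeper
{
    [DllImport("kernel32.dll")]
    private static extern bool Beep(uint frequency, uint duration);

    // The wrapper assembly, not its callers, holds UnmanagedCode
    // permission. The declarative Assert stops the stack walk here,
    // so downloaded code can call PlayTone without that very
    // powerful permission.
    [SecurityPermission(SecurityAction.Assert, UnmanagedCode = true)]
    public static void PlayTone(uint frequency, uint duration)
    {
        Beep(frequency, duration);
    }
}
```

The assert succeeds only if the wrapper assembly itself was granted UnmanagedCode; the wrapper remains responsible for verifying that the native call is harmless.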

For more information, see the Unmanaged Code and Assessing Permissions sections of this document.

Library Code That Exposes Protected Resources

This is the most powerful and hence potentially dangerous (if done incorrectly) approach for security coding: your library serves as an interface for other code to access certain resources that are not otherwise available, just as the classes of the .NET Framework enforce permissions for the resources they use. Wherever you expose a resource, your code must first demand the permission appropriate to the resource (that is, do a security check) and then typically assert its rights to perform the actual operation.
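A sketch of this demand-then-assert pattern follows (the wrapper class is hypothetical; PlaySound is a real winmm.dll function). The code first demands the permission appropriate to the resource from all callers, and only then asserts the unmanaged code right needed for the actual operation:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

public class SoundResource
{
    [DllImport("winmm.dll")]
    private static extern bool PlaySound(string pszSound, IntPtr hmod, uint fdwSound);

    private const uint SND_FILENAME = 0x00020000;

    public void Play(string clipPath)
    {
        // 1. Demand: every caller on the stack must hold the
        //    resource-appropriate permission (here, read access to
        //    the sound file).
        new FileIOPermission(FileIOPermissionAccess.Read, clipPath).Demand();

        // 2. Assert: stop the UnmanagedCode stack walk here so the
        //    native call succeeds even for partially trusted callers.
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
        try
        {
            PlaySound(clipPath, IntPtr.Zero, SND_FILENAME);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

Reverting the assert in a finally block keeps the override scoped to the single protected operation.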

For more information, see the Unmanaged Code and Assessing Permissions sections of this document.

Best Practices for Secure Coding

Note   Code samples are written in C# unless otherwise specified.

Permission requests are a great way to make your code security aware. These requests allow you to do two things:

  • Request the minimum permissions your code must receive to run.
  • Ensure that your code receives no more permissions than it actually needs.

    For example:

     [assembly:FileIOPermissionAttribute(SecurityAction.RequestMinimum, Write="C:\\test.tmp")]
     [assembly:PermissionSetAttribute(SecurityAction.RequestOptional, Unrestricted=false)]

This example tells the system that the code should not be run unless it receives permission to write C:\test.tmp. If the code ever encounters security policy that does not grant this permission, a PolicyException will be raised and the code will not run. You can be sure that your code will be granted this permission and you do not have to worry about errors caused by having too few permissions.

This example also tells the system that no additional permissions are wanted. Absent this, your code will be granted whatever permissions policy chooses to give it. While extra permissions do not cause harm, if there is a security bug somewhere, having fewer permissions could well close the hole. Carrying permissions that your code does not need can lead to security problems.

Another way to limit the permissions your code receives to the fewest privileges is to list specific permissions you want to refuse. Permissions are typically refused when you ask that all permissions be optional and exclude specific permissions from that request.
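For example, the following assembly-level declarations (a sketch; the choice of refused permission is illustrative) make every permission optional and then refuse unmanaged code rights outright, so that no matter how generous policy is, this assembly can never call native code:

```csharp
using System.Security.Permissions;

// All permissions are optional; policy may grant any subset.
[assembly: PermissionSetAttribute(SecurityAction.RequestOptional, Unrestricted = true)]
// ...except this one, which is always excluded from the grant.
[assembly: SecurityPermissionAttribute(SecurityAction.RequestRefuse, UnmanagedCode = true)]
```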

Securing State Data

Applications that handle sensitive data or make any kind of security decisions need to keep that data under their own control and cannot allow other potentially malicious code to access the data directly. The best way to keep data securely in memory is as private or internal (scope limited to the same assembly) variables. However, even this data is subject to access you should be aware of:

  • Under reflection, highly trusted code that has reference to your object can get and set private members.
  • Using serialization, highly trusted code can effectively get and set private members if it can access the corresponding data in the serialized form of the object.
  • Under debugging, this data can be read.

Make sure none of your own methods or properties expose these values unintentionally.

In some cases, data can be secured as "protected," with access limited to the class and its derivatives. However, you should take the following additional precautions due to additional exposure:

  • Control what code is allowed to derive from your class by restricting it to the same assembly, or by using declarative security to require some identity or permissions in order to derive from your class (see the Securing Method Access section of this document).
  • Ensure that all derived classes implement similar protection or are sealed.
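The precautions above can be combined in a minimal sketch (the class is hypothetical): the class is sealed so no derived class can widen the exposure, the sensitive field is private, and only defensive copies cross the boundary:

```csharp
public sealed class ConnectionInfo   // sealed: no derived-class exposure
{
    private byte[] key;              // private: same-assembly reflection aside,
                                     // only this class can touch it

    public ConnectionInfo(byte[] key)
    {
        // Defensive copy on the way in: the caller keeps no live
        // reference to the internal state.
        this.key = (byte[])key.Clone();
    }

    // Expose only a copy, so callers cannot mutate the private state
    // through the returned array.
    public byte[] GetKey()
    {
        return (byte[])key.Clone();
    }
}
```

Mutating either the array passed to the constructor or the array returned by GetKey leaves the stored key unchanged.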

Boxed Value Types

Boxed value types can sometimes be modified in cases where you think you have distributed a copy of the type that cannot modify the original. When you return a boxed value type, you are returning a reference to the value type, not a reference to a copy of the value type, thus allowing the code that called your code to modify the value of your variable.

The following C# code example shows how boxed value types can be modified using a reference.

using System;
using System.Reflection;
using System.Collections;

class bug {
  // Suppose you have an API element that exposes a
  // field through a property with only a get accessor.
  public object m_Property;
  public Object Property {
    get { return m_Property; }
    set { m_Property = value; } // (if applicable)
  }
  // You can modify the value of this by calling
  // a by-ref method with this signature.
  public static void m1( ref int j ) {
    j = Int32.MaxValue;
  }
  public static void m2( ref ArrayList j ) {
    j = new ArrayList();
  }
  public static void Main(String[] args) {
    Console.WriteLine( "////// doing this with a value type" );
    {
      bug b = new bug();
      b.m_Property = 4;
      Object[] objArr = new Object[]{b.Property};
      Console.WriteLine( b.m_Property );
      typeof(bug).GetMethod( "m1" ).Invoke( null, objArr );
      // Note that the property changed.
      Console.WriteLine( b.m_Property );
      Console.WriteLine( objArr[0] );
    }
    Console.WriteLine( "////// doing this with a normal type" );
    {
      bug b = new bug();
      ArrayList al = new ArrayList();
      b.m_Property = al;
      Object[] objArr = new Object[]{b.Property};
      Console.WriteLine( ((ArrayList)(b.m_Property)).Count );
      typeof(bug).GetMethod( "m2" ).Invoke( null, objArr );
      // Note that the property does not change.
      Console.WriteLine( ((ArrayList)(b.m_Property)).Count );
      Console.WriteLine( ((ArrayList)(objArr[0])).Count );
    }
  }
}

Securing Method Access

Some methods might not be suitable to allow arbitrary untrusted code to call. Such methods pose several risks: the method might provide some restricted information; it might believe any information passed to it; it might not do error checking on the parameters; or with the wrong parameters, it might malfunction or do something harmful. You should be aware of these cases and take suitable action to secure the method.

In some cases, you might need to restrict methods that are not intended for public use, but still must be public. For example, you might have an interface that needs to be called across your own DLLs, and hence must be public, but you do not want to expose it publicly to prevent customers from using it or to prevent malicious code from exploiting the entry point into your component. Another common reason to restrict a method not intended for public use (yet that must be public) is to avoid having to document and support what may be a very internal interface.

Managed code affords several ways to restrict method access:

  • Limit the scope of accessibility to the class, assembly, or derived classes, if they can be trusted. This is the simplest way to limit method access. Note that, in general, derived classes can be less trustworthy than the class they derive from, yet in some cases they share the superclass identity. In particular, do not infer trust from the keyword protected, which is not necessarily used in the security context.
  • Limit the method access to callers of a specified identity (essentially, any particular evidence you choose).
  • Limit the method access to callers having whatever permissions you select.

Similarly, declarative security allows you to control inheritance of classes. You can use InheritanceDemand to do the following:

  • Require derived classes to have a specified identity or permission.
  • Require derived classes that override specific methods to have a specified identity or permission.

Example: Securing Access to a Class or Method

The following example shows how to secure a public method for limited access.

    Securing a method by strong name identity

    sn -k keypair.dat
    csc /r:App1.dll /a.keyfile:keypair.dat App1.cs
    sn -p keypair.dat public.dat
    sn -tp public.dat >publichex.txt

    [StrongNameIdentityPermissionAttribute(SecurityAction.LinkDemand,
        PublicKey="...hexadecimal key from publichex.txt...")]
    public class Class1
    {
        // Methods of the protected class.
    }

  1. The sn -k command creates a new private/public key pair. The private part is needed to sign the code with a strong name and is kept securely by the publisher of the code. (If revealed, anyone could impersonate your signature on their code, defeating the protection.)
  2. The csc command compiles and signs App1, authorizing it to access the protected method.
  3. The next two sn commands extract the public key portion from the pair and format it in hexadecimal.
  4. The lower half of the example is a source code excerpt of the protected class. The custom attribute defines the strong name identity and specifies the public key of the key pair, with the hexadecimal data from sn inserted as the PublicKey property.
  5. At run time, App1 has the required strong name signature and is allowed to use Class1.

This sample uses a LinkDemand to protect an API element; see later sections of this document for important information about the limitations of using LinkDemand.

Excluding Classes and Methods from Use by Untrusted Code

Use the following declarations to prevent classes and methods (including properties and events) from being used by partially trusted code. Applying these declarations to a class protects all of its methods, properties, and events; note, however, that field access is not affected by declarative security. Note also that link demands protect only against the immediate callers and might still be subject to the luring attacks described in the Evidence-Based Security and Code Access Security section of this document.

Strong-named assemblies have declarative security applied to all publicly accessible methods, properties, and events, restricting their use to fully trusted callers, unless the assembly explicitly opts in to partially trusted use by applying the AllowPartiallyTrustedCallers attribute. Thus, explicitly marking classes to exclude untrusted callers is necessary only for unsigned assemblies or for assemblies with this attribute, and then only for the subset of types therein that are not intended for untrusted callers. For full details, see the Version 1 Security Changes for the Microsoft .NET Framework document.

  • For public non-sealed classes:
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.InheritanceDemand, Name="FullTrust")]
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name="FullTrust")]
    public class CanDeriveFromMe
  • For public sealed classes:
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name="FullTrust")]
    public sealed class CannotDeriveFromMe
  • For public abstract classes:
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.InheritanceDemand, Name="FullTrust")]
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name="FullTrust")]
    public abstract class CannotCreateInstanceOfMe_CanCastToMe
  • For public virtual functions:
    class Base {
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name="FullTrust")]
      public virtual void CanOverrideOrCallMe() { ... }
  • For public abstract functions:
    class Base {
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.InheritanceDemand, Name="FullTrust")]
      public abstract void CanOverrideMe();
  • For public override functions where the base does not demand full trust:
    class Derived {
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.Demand, Name="FullTrust")]
      public override void CanOverrideOrCallMe() { ... }
  • For public override functions where the base demands full trust:
    class Derived {
      public override void CanOverrideOrCallMe() { ... }
  • For public interfaces:
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name="FullTrust")]
      [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.InheritanceDemand, Name="FullTrust")]
    public interface CanCastToMe

Demand Versus LinkDemand

Declarative security offers two kinds of security checks that are similar but perform very different checks. It is worth your time to understand both forms because the wrong choice can result in weak security or performance loss. This section is not intended to be a thorough description of these features; see the product documentation for full details.

Declarative security offers the following security checks:

  • Demand specifies the code access security stack walk: all callers on the stack must have the permission or identity to pass. Demand occurs on every call because the stack might contain different callers. If you call a method repeatedly, this security check occurs each time. Demand is strong against luring attacks; unauthorized code trying to get through it will be caught.
  • LinkDemand happens at just-in-time (JIT) compilation time (in the previous example, when App1 code that references Class1 is about to execute) and it checks only the immediate caller. This security check does not check the caller's caller. Once this check passes, there is no additional security overhead no matter how many times the code is called. However, there is also no protection from luring attacks. With LinkDemand, your interface is safe, but any code that passes the test and can reference your code can potentially break security by allowing malicious code to call through the authorized code. Therefore, do not use LinkDemand unless all the possible weaknesses can be thoroughly avoided.

The extra precautions required when using LinkDemand must be "hand crafted" (the security system can help with enforcement). Any mistake opens a security weakness. All authorized code that uses your code must be responsible for implementing additional security by doing the following:

  • Restricting the calling code's access to the class or assembly.
  • Placing the same security checks on that code and obligating its callers to do so. For example, if you write code that calls a method that is protected with a LinkDemand for the SecurityPermission.UnmanagedCode permission, your method should also make a LinkDemand (or Demand, which is stronger) for this permission. The exception is if your code uses the LinkDemand-protected method in a limited way that is always safe or that you decide is safe, given other security protection mechanisms (such as demands) in your code. This exceptional case is where the caller takes responsibility in weakening the security protection on the underlying code.
  • Ensuring that its callers cannot trick it into calling the protected code on their behalf (that is, callers cannot force the authorized code to pass specific parameters to the protected code, or to get results back from it).
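The obligation in the second bullet can be sketched as follows (both classes are hypothetical): the method protected by a LinkDemand is re-protected by its authorized caller with a full Demand, so callers of the intermediary are also checked:

```csharp
using System.Security.Permissions;

public class NativeBridge
{
    // LinkDemand: checked once, at JIT time, against the immediate
    // caller only.
    [SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
    public static void DangerousOperation()
    {
        // ...protected work...
    }
}

public class Middleware
{
    // Having passed the LinkDemand, this caller takes on the
    // obligation: it places a full Demand (stronger than a
    // LinkDemand) on its own callers before forwarding the call.
    [SecurityPermission(SecurityAction.Demand, UnmanagedCode = true)]
    public static void Forward()
    {
        NativeBridge.DangerousOperation();
    }
}
```

Without the Demand on Forward, any code able to call Middleware would reach DangerousOperation unchecked.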

Interfaces and LinkDemands

If a virtual method, property, or event with LinkDemand overrides a base class method, the base class method must also have the same LinkDemand for the overridden method to be secure. It is possible for malicious code to cast back to the base type and call the base class method. Also note that LinkDemands can be added implicitly to assemblies that do not have the AllowPartiallyTrustedCallersAttribute assembly-level attribute.

It is a good practice to protect method implementations with LinkDemands when interface methods also have LinkDemands.

Note the following about using LinkDemands with interfaces:

  • The AllowPartiallyTrustedCallers attribute can affect interfaces.
  • You can place LinkDemands on interfaces to selectively opt out certain interfaces from partially trusted code use, such as when using the AllowPartiallyTrustedCallers attribute.
  • If you have an interface defined in an assembly that does not contain the AllowPartiallyTrustedCallers attribute, you can implement that interface on a partially trusted class.
  • If you place a LinkDemand on a public method of a class that implements an interface method, the LinkDemand will not be enforced if you then cast to the interface and call the method. In this case, because you linked against the interface, only the LinkDemand on the interface is honored.
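The cast-around weakness in the last bullet can be sketched as follows (the types and the choice of permission are illustrative): the LinkDemand sits only on the class method, so a call made through the interface links against the unprotected interface method and the demand is never evaluated:

```csharp
using System.Security.Permissions;

public interface IWorker
{
    void DoWork();           // no LinkDemand on the interface method
}

public class Worker : IWorker
{
    // LinkDemand on the implementing class method only.
    [SecurityPermission(SecurityAction.LinkDemand, ControlPolicy = true)]
    public void DoWork() { /* protected operation */ }
}

public class Caller
{
    public static void Bypass(Worker w)
    {
        // This call links against IWorker.DoWork, so the LinkDemand
        // on Worker.DoWork is not enforced.
        IWorker i = w;
        i.DoWork();
    }
}
```

Placing the same LinkDemand on IWorker.DoWork closes this gap.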

The following items should be reviewed for security issues:

  • Explicit link demands on interface methods. Make sure these link demands offer the expected protection. Determine whether malicious code can use a cast to get around the link demands as described previously.
  • Virtual methods with link demands.
  • Types and the interfaces they implement should use LinkDemands consistently.

Virtual Internal Overrides

There is a nuance of type system accessibility to be aware of when confirming that your code is unavailable to other assemblies. A method that is declared virtual and internal can override the superclass vtable entry, and because it is internal it can be called only from within the same assembly. However, eligibility for overriding is determined by the virtual keyword, so such a method can be overridden from another assembly as long as that code has access to the class itself. If the possibility of an override presents a problem, use declarative security to prevent it, or remove the virtual keyword if it is not strictly required.
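A minimal sketch of the nuance and its fixes (the classes are hypothetical):

```csharp
public class Holder
{
    // "internal" limits who may CALL this method, but whether it can
    // be OVERRIDDEN is governed by "virtual": per the caution above,
    // code in another assembly with access to Holder may still be
    // able to supply an override.
    internal virtual void Reset() { }

    // Fix 1: drop "virtual" when polymorphism is not required.
    internal void ResetSafe() { }
}

// Fix 2: seal the class so no further overrides are possible anywhere.
public sealed class SealedHolder
{
    internal void Reset() { }
}
```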

Wrapper Code

Wrapper code—especially where the wrapper has higher trust than code that uses it—can open a unique set of security weaknesses. Anything done on behalf of a caller, where the caller's limited permissions are not included in the appropriate security check, is a potential weakness to be exploited.

Never enable something through the wrapper that the caller could not do itself. This is a special danger when doing something that involves a limited security check (as opposed to a full stack walk demand). When single-level checks are involved, interposing the wrapper code between the real caller and the API element in question can easily cause the security check to succeed when it should not, thereby weakening security.


Whenever your code takes delegates from less trusted code that might call it, make sure that you are not enabling the less trusted code to escalate its permissions. If you take a delegate and use it later, the code that created the delegate is not on the call stack and its permissions will not be tested if code in or under the delegate attempts a protected operation. If your code and the delegate code have higher privileges than the caller, this provides a way for the caller to orchestrate the call path without being part of the call stack.

To address this issue, you can either limit your callers (for example, by requiring a permission) or restrict permissions under which the delegate can execute (for example, by using a Deny or PermitOnly stack override).
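The second option can be sketched as follows (the delegate and class names, and the choice of permitted permission, are illustrative): before invoking a caller-supplied delegate, the code restricts the effective grant with PermitOnly, so the delegate cannot borrow this assembly's broader permissions:

```csharp
using System.Security;
using System.Security.Permissions;

public delegate void Callback();

public class Notifier
{
    // Run a caller-supplied delegate with only the permissions named
    // here, so the (possibly less trusted) creator of the delegate
    // cannot exploit the fact that it is not on the call stack.
    public static void Invoke(Callback cb)
    {
        PermissionSet allowed = new PermissionSet(PermissionState.None);
        allowed.AddPermission(
            new UIPermission(UIPermissionWindow.SafeTopLevelWindows));
        allowed.PermitOnly();
        try
        {
            cb();   // any demand beyond "allowed" now fails
        }
        finally
        {
            CodeAccessPermission.RevertPermitOnly();
        }
    }
}
```

Reverting in a finally block confines the stack override to the delegate invocation itself.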

LinkDemands and Wrappers

There is a special protection case with link demands that has been strengthened in the security infrastructure, yet is still a source of possible weakness in your code.

If fully trusted code calls a property, event, or method protected by a LinkDemand, the call will succeed if the LinkDemand permission check for the caller is satisfied. Additionally, if the fully trusted code exposes a class that takes the name of a property and calls its get accessor using reflection, that call to the get accessor will succeed even though the user code does not have the right to access this property. This is because the LinkDemand will check only the immediate caller, which is the fully trusted code. In essence, the fully trusted code is making a privileged call on behalf of user code without making sure that the user code has the right to make that call. If you are wrapping reflection functionality, see the Version 1 Security Changes for the Microsoft .NET Framework article for details.

To prevent inadvertent security holes such as those described above, the runtime extends the check into a full stack-walking demand on any use of invoke (instance creation, method invocation, property set or get) to a method, constructor, property, or event protected by a link demand. This protection incurs some performance costs (the one-level LinkDemand was faster) and changes the semantics of the security check—the full stack-walk demand might fail where the one-level check would have passed.

Assembly Loading Wrappers

Several methods used to load managed code, including Assembly.Load(byte[]), load assemblies with the evidence of the caller. Specifically, if you wrap any of these methods, the security system could use your code's permission grant, instead of the permissions of the caller to your wrapper, to load the assemblies. Obviously, you do not want to allow less trusted code to have you load code on its behalf that is granted higher permissions than those of the caller to your wrapper.

Any code that has full trust or significantly higher trust than a potential caller (including an Internet-permissions-level caller) could be vulnerable to weakening security in this way. If your code has a public method that takes a byte array and passes it to Assembly.Load(byte[]), thereby creating an assembly on the caller's behalf, it might break security.

This issue applies to the following API elements:

  • System.AppDomain.DefineDynamicAssembly
  • System.Reflection.Assembly.LoadFrom
  • System.Reflection.Assembly.Load
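A sketch of the risky wrapper and one possible mitigation follows (the class is hypothetical; Assembly.Load(byte[]) is the real API discussed above):

```csharp
using System.Reflection;
using System.Security.Permissions;

public class PluginHost
{
    // Risky: the loaded bytes receive evidence derived from THIS
    // assembly, so a low-trust caller could have its payload run
    // with this code's permission grant.
    public static Assembly LoadPluginUnsafe(byte[] image)
    {
        return Assembly.Load(image);
    }

    // Safer: demand full trust (or another suitably strong check)
    // from every caller before loading on its behalf.
    [PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
    public static Assembly LoadPlugin(byte[] image)
    {
        return Assembly.Load(image);
    }
}
```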

Exception Handling

A filter expression further up the stack will run before any finally statement. The catch block associated with that filter runs after the finally statement. Consider the following pseudocode:

void Main() {
    try {
        Sub();
    } except (Filter()) {
        Console.WriteLine("catch");
    }
}

bool Filter () {
    Console.WriteLine("filter");
    return true;
}

void Sub() {
    try {
        Console.WriteLine("throw");
        throw new Exception();
    } finally {
        Console.WriteLine("finally");
    }
}

This code prints the following:

throw
filter
finally
catch
The filter runs before the finally statement, so security issues can be introduced by anything that makes a state change where execution of other code could take advantage. For example:

            try {
                // This means changing anything (state variables,
                // switching unmanaged context, impersonation, and so on)
                // that could be exploitable if malicious code ran
                // before state is restored.
            } finally {
                // This simply restores the state change above.
            }

This pseudo-code allows a filter back up the stack to run arbitrary code. Other examples of operations that would have similar effect are temporary impersonation of another identity, setting an internal flag that bypasses some security check, changing the culture associated with the thread, and so forth.

The recommended solution is to introduce an exception handler to isolate your code's changes to thread state from callers' filter blocks. However, the exception handler must be introduced properly or this problem will not be fixed. The following example switches the UI culture, but any kind of thread state change could be similarly exposed; the Microsoft Visual Basic® user code that follows it uses an exception filter to observe the switched culture.

   CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;
   try {
      Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
      // Do something that throws an exception.
   }
   finally {
      Thread.CurrentThread.CurrentUICulture = saveCulture;
   }

Public Class UserCode
   Public Shared Sub Main()
      Try
         Dim obj As YourObject = New YourObject
         obj.YourMethod()
      Catch e As Exception When FilterFunc()
         Console.WriteLine("An error occurred: '{0}'", e)
         Console.WriteLine("Current Culture: {0}", Thread.CurrentThread.CurrentUICulture)
      End Try
   End Sub

   Public Shared Function FilterFunc() As Boolean
      Console.WriteLine("Current Culture: {0}", Thread.CurrentThread.CurrentUICulture)
      Return True
   End Function

End Class

The correct fix in this case is to wrap the existing try/finally block in a try/catch block. Simply introducing a catch-throw clause into the existing try/finally block will not fix the problem:

   CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;

   try {
      Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
      // Do something that throws an exception.
   }
   catch { throw; }
   finally {
      Thread.CurrentThread.CurrentUICulture = saveCulture;
   }

This does not fix the problem because your finally statement has not run before the FilterFunc gets control.

The following code fixes the problem by ensuring that your finally clause has executed before offering an exception up the callers' exception filter blocks.

   CultureInfo saveCulture = Thread.CurrentThread.CurrentUICulture;
   try {
      try {
         Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
         // Do something that throws an exception.
      }
      finally {
         Thread.CurrentThread.CurrentUICulture = saveCulture;
      }
   }
   catch { throw; }

Unmanaged Code

Some library code will need to call into unmanaged code (for example, native code APIs, such as Win32). Because that means going outside the security perimeter for managed code, due caution is required. If your code is security neutral (see the Security-Neutral Code section of this document), both your code and any code that calls it must have unmanaged code permission (SecurityPermission.UnmanagedCode).

However, it will often be unreasonable to require your caller to have such powerful permissions. In such cases, your trusted code can be the go-between, similar to the managed wrapper or library code described previously. If the underlying unmanaged code functionality is totally safe, it can be directly exposed; otherwise, a suitable permission check (demand) is required first.

When your code calls into unmanaged code but you do not want your callers to have that permission, you must assert the right to call unmanaged code. An assertion blocks the stack walk at your frame. You must be scrupulously careful that you do not create a security hole in this process. Usually, this means that you must demand a suitable permission of your callers and then use unmanaged code to perform only what that permission allows and no more. In some cases (for example, getting the time of day), unmanaged code can be directly exposed to callers without any security checks. In any case, any code that asserts must take responsibility for security.
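The demand-then-assert pattern might be sketched as follows under the .NET Framework code access security model. The permission choices and the NativeReadFile helper are purely illustrative, not a real API, and this code targets the .NET Framework's CAS mechanism rather than later runtimes:

```csharp
using System.Security;
using System.Security.Permissions;

public static class TempFileReader
{
    // Demands a narrow permission of all callers, then asserts
    // UnmanagedCode so the stack walk stops at this frame before
    // the native call is made.
    public static string ReadTempFile(string name)
    {
        // The caller must hold read access to c:\temp; name should
        // also be validated (see the User Input section).
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\temp").Demand();

        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
        try
        {
            return NativeReadFile(@"C:\temp\" + name);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }

    // Stands in for a P/Invoke call into unmanaged code.
    private static string NativeReadFile(string path)
    {
        return "...";
    }
}
```

Note that the assert is reverted in a finally block so that the stack walk is blocked only for the duration of the unmanaged call.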

Because any managed code that affords a code path into native code is a potential target for malicious code, determining which unmanaged code can be safely used and how it must be used requires extreme care. Generally, no unmanaged code should ever be directly exposed to partially trusted callers (see the following section). There are two primary considerations in evaluating the safety of unmanaged code use in libraries that are callable by partially trusted code:

  • Functionality. Does the unmanaged API provide safe functionality that does not allow potentially dangerous operations to be performed by calling it? Code access security uses permissions to enforce access to resources, so consider whether the API uses files, user interface, threading, or exposes protected information. If it does, the managed code wrapping it must demand the necessary permissions before allowing it to be entered. Additionally, while not protected by a permission, security requires that memory access be confined to strict type safety.
  • Parameter checking. A common attack passes unexpected parameters to exposed unmanaged code API methods in an attempt to cause them to operate out of specification. Buffer overruns, typically triggered by out-of-range index or offset values, are one common example of this type of attack; parameters that exploit a bug in the underlying code are another. Thus, even if the unmanaged code API is functionally safe for partially trusted callers (after necessary demands), managed code must also check parameter validity exhaustively to ensure that no unintended calls are possible from malicious code using the managed code wrapper layer.

Using SuppressUnmanagedCodeSecurity

There is a performance aspect to asserting and then calling unmanaged code. For every such call, the security system automatically demands unmanaged code permission, resulting in a stack walk each time. If you assert and immediately call unmanaged code, the stack walk can be meaningless: it consists of your assert and your unmanaged code call.

A custom attribute called SuppressUnmanagedCodeSecurity can be applied to unmanaged code entry points to disable the normal security check that demands SecurityPermission.UnmanagedCode. Extreme caution must always be taken when doing this because this action creates an open door into unmanaged code with no runtime security checks. It should be noted that even with SuppressUnmanagedCodeSecurity applied, there is a one-time security check that happens at JIT time to ensure that the immediate caller has permission to call unmanaged code.

If you use the SuppressUnmanagedCodeSecurity attribute, check the following points:

  • Make the unmanaged code entry point inaccessible outside your code (for example, "internal").
  • Any place you call into unmanaged code is a potential security hole. Make sure your code is not a portal for malicious code to indirectly call into unmanaged code and avoid a security check. Demand permissions, if appropriate.
  • Use a naming convention to make it explicit when you are creating a dangerous path into unmanaged code, as described in the next section.
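Following those points, a suppressed entry point might be declared as sketched below. The Win32 GetTickCount function is used purely for illustration; the class name anticipates the naming convention described in the next section:

```csharp
using System.Runtime.InteropServices;
using System.Security;

// Kept internal so code outside this assembly cannot reach the
// suppressed entry point directly.
internal static class Unsafe
{
    // SuppressUnmanagedCodeSecurity removes the per-call demand for
    // SecurityPermission.UnmanagedCode; only the one-time JIT-level
    // check on the immediate caller remains. Any public code path
    // leading here must therefore demand appropriate permissions.
    [SuppressUnmanagedCodeSecurity]
    [DllImport("kernel32.dll")]
    internal static extern uint GetTickCount();
}
```

Because the runtime no longer guards each call, the wrapper code around such a declaration carries the full burden of parameter checking and permission demands.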

Naming Convention for Unmanaged Code Methods

A useful and highly recommended convention has been established for naming unmanaged code methods. All unmanaged code methods are separated into three categories: safe, native, and unsafe. These keywords can be used as class names within which the various kinds of unmanaged code entry points are defined. In source code, these keywords should be added to the class name; for example, Safe.GetTimeOfDay, Native.Xyz, or Unsafe.DangerousAPI. Each of these categories should send a strong message to the developers using them, as described in the following list.

  • safe. Completely harmless for any code (even malicious) to call. Can be used just like other managed code. Example: get time of day.
  • native. Security neutral; that is, unmanaged code that requires unmanaged code permission to call. Security is checked, which stops an unauthorized caller.
  • unsafe. Potentially dangerous unmanaged code entry point with security suppressed. Developers should use the greatest caution when using such unsafe code, making sure that other protections are in place to avoid a security vulnerability. Developers must be responsible, as this keyword overrides the security system.

User Input

User data, which is any kind of input (data from a Web request or URL, inputs to controls of a Microsoft Windows Forms application, and so on), can adversely influence code because often that data is used directly as parameters to call other code. This situation is analogous to malicious code calling your code with strange parameters and the same precautions should be taken. User input is actually harder to make safe because there is no stack frame to trace the presence of the potentially untrusted data.

These are among the subtlest and hardest security bugs to find because although they can exist in code that is seemingly unrelated to security, they are a gateway to pass bad data through to other code. To look for these bugs, follow any kind of input data, imagine what the range of possible values might be, and consider whether the code seeing this data can handle all those cases. You can fix these bugs through range checking and by rejecting all inputs the code cannot handle.

Some common mistakes involving user data include the following:

  • Any user data in server response runs in the context of the server's site on the client. If your Web server takes user data and inserts it into the returned Web page, it might, for example, include a <script> tag and run as if from the server.
  • Remember that the client can request any URL.
  • Consider tricky or invalid paths:
    • ..\ , extremely long paths.
    • Use of wildcards (*).
    • Token expansion (%token%).
    • Strange forms of paths with special meaning.
    • Alternate NTFS stream names; for example, filename::$DATA.
    • Short versions of file names; for example, longfilename and longfi~1.
  • Eval(userdata) can do anything.
  • Late binding to a name that includes some user data.
  • If you are dealing with Web data, you must consider the various forms of escapes that are permissible, including:
    • Hex escapes (%nn).
    • Unicode escapes (%unnnn).
    • Overlong UTF-8 escapes (%nn%nn).
    • Double escapes (%nn becomes %mmnn, where %mm is the escape for '%').
  • Be wary of user names that might have more than one canonical format. Consider that in Microsoft Windows 2000, you can often use the REDMOND\username form or the username@redmond.microsoft.com form.
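As one hedged illustration of the path checks above, a wrapper can canonicalize a user-supplied path before use and reject anything that escapes an allowed root directory; the PathGuard class and the root directory used below are hypothetical:

```csharp
using System;
using System.IO;

static class PathGuard
{
    // Canonicalizes a user-supplied path and verifies that it stays
    // under the allowed root, defeating "..\" sequences and other
    // tricky forms that survive naive string checks.
    public static bool IsUnderRoot(string userPath, string allowedRoot)
    {
        // Append the separator so that "c:\approot2" does not pass a
        // prefix check against "c:\approot".
        string fullRoot = Path.GetFullPath(allowedRoot)
            .TrimEnd(Path.DirectorySeparatorChar) + Path.DirectorySeparatorChar;

        // GetFullPath resolves "..\" and "." segments to a canonical form.
        string fullPath = Path.GetFullPath(Path.Combine(allowedRoot, userPath));

        return fullPath.StartsWith(fullRoot, StringComparison.OrdinalIgnoreCase);
    }
}
```

This addresses only path traversal; wildcards, token expansion, alternate stream names, and short file names still need their own checks.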

Remoting Considerations

Remoting allows you to set up transparent calling between application domains, processes, or machines. However, the code access security stack walk cannot cross process or machine boundaries (it does apply between application domains of the same process).

Any class that is remotable (derived from a MarshalByRefObject class) needs to take responsibility for security. Either the code should be used only in closed secure environments where the calling code can be implicitly trusted, or remoting calls should be designed so that they do not expose protected code to outside entry that could be used maliciously.

You should also generally never expose methods, properties, or events that are protected by declarative LinkDemand and InheritanceDemand security checks. With remoting, these checks are not enforced. Other security checks, such as Demand, Assert, and so on, work between application domains within a process, but not cross-process or cross-machine.

Protected Objects

Some objects hold security state in themselves. These objects should not be passed to untrusted code, which would then acquire security authorization beyond its own permissions.

An example is creating a FileStream object. The FileIOPermission is demanded at the time of creation and, if the demand succeeds, the file object is returned. However, if this object reference is passed to code without file permissions, that code will be able to read and write this particular file.

The simplest defense for such an object is to demand the same FileIOPermission of any code that seeks to get the object reference through a public API element.

Serialization

Use of serialization can allow other code to see or modify object instance data that would otherwise be inaccessible. As such, a special permission is required of code doing serialization: SecurityPermission.SerializationFormatter. Under default policy, this permission is not given to Internet-downloaded or intranet code; only code on the local machine is granted this permission.

Normally all fields of an object instance are serialized, meaning that data will be represented in the serialized data for the instance. It is possible for code that can interpret the format to determine what the data values are, independent of the accessibility of the member. Similarly, deserialization extracts data from the serialized representation and sets object state directly, again irrespective of accessibility rules.

For any object that could contain security-sensitive data, make that object non-serializable, if possible. If it must be serializable, try to make specific fields that hold sensitive data non-serializable. If this cannot be done, be aware that this data will be exposed to any code that has permission to serialize and make sure that no malicious code can get this permission.

The ISerializable interface is intended to be used only by the serialization infrastructure. However, if unprotected, it can potentially give out sensitive information. If custom serialization is provided by implementing ISerializable, make sure you take the following precautions:

  • GetObjectData should be explicitly secured by either demanding the SecurityPermission.SerializationFormatter permission or by making sure that no sensitive information is handed out with the method output. For example:

    [SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter=true)]
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
  • The special constructor used for serialization should also do thorough input validation and should be either protected or private to avoid abuse by malicious code. It should enforce the same security checks and permissions required to obtain an instance of such a class by any other means (creating it explicitly or indirectly through some kind of factory).

Application Domain Crossing Issues

To isolate code in managed hosting environments, it is common to spawn multiple child application domains with explicit policy reducing the permission levels for various assemblies. However, the policy for those assemblies remains unchanged in the default application domain. If one of the child application domains can force the default application domain to load an assembly, the effect of code isolation is lost and types in those assemblies will be able to run code at a higher level of trust.

An application domain can force another application domain to load an assembly and run code contained therein by calling a proxy to an object hosted in the other application domain. To obtain a cross-application-domain proxy, the application domain hosting the object must hand one out (through a method call parameter or return value), or if the application domain was just created, the creator will have a proxy to the AppDomain object. Thus, to avoid breaking code isolation, an application domain with a higher level of trust should not hand out references to MarshalByRefObject objects in its domain to application domains with lower levels of trust.

Usually, the default application domain will create the child application domains with a control object in each one. The control object manages the new application domain and occasionally takes orders from the default application domain, but it will not actually know how to contact the default application domain directly. Occasionally, the default application domain will call its proxy to the control object. However, there might be cases where it is necessary for the control object to be able to call back to the default application domain. In these cases, the default application domain passes a marshal-by-reference callback object to the constructor of the control object. It is the responsibility of the control object to protect this proxy. If the control object were to place the proxy in a public static field of a public class, or otherwise publicly expose the proxy, this would open up a dangerous mechanism for other code to call back into the default application domain. For this reason, control objects are always implicitly trusted to keep the proxy private.

Assessing Permissions

The premise of evidence-based security is that only trustworthy code will be granted high trust (many powerful permissions) and malicious code will only be granted little or no trust. Default policy as shipped with the .NET Framework uses zones (as determined by Microsoft Internet Explorer) to grant permissions. A simplified description of default policy follows:

  • Local machine zone (for example, c:\app.exe) receives full trust. The assumption is that users should place only code they trust on their machine and that most users do not want to segregate their hard disk with different areas of trust. This code can fundamentally do anything and managed code security is not enforceable, so there is no way to defend against malicious code in this zone.
  • Internet zone (for example, http://www.microsoft.com/) code is granted a very limited set of permissions that are generally considered safe to grant, even to malicious code. In general, such code cannot be trusted, so it can be safely executed only with a weak set of permissions with which it can do no harm:
    • WebPermission. Access to the same site server from which it came.
    • FileDialogPermission. Access only to files that the user specifically chooses.
    • IsolatedStorageFilePermission. Persistent storage, isolated by the Web site.
    • UIPermission. Can write within a safe containing UI window.
  • Intranet zone (for example, \\UNC\share) code is granted a slightly stronger set of the Internet permissions, but still no powerful permissions:
    • FileIOPermission. Read-only access to files in the directory it comes from.
    • WebPermission. Access to the server it comes from.
    • DNSPermission. Allows DNS names to be resolved to IP addresses.
    • FileDialogPermission. Access only to files that the user specifically chooses.
    • IsolatedStorageFilePermission. Persistent storage, fewer restrictions.
    • UIPermission. Can freely use its own top-level windows.
  • Restricted sites zone code is granted only the minimal permission to execute.

You should consider your security requirements and modify security policy appropriately. No single security configuration can fit all needs: the default policy is intended to be generally useful without allowing anything that would be dangerous.

Your code will receive different permissions, depending on how it is deployed. Make sure that your code will be granted enough permissions to operate properly. When considering securing your code against attack, think about where the attacking code might come from and how it might access your code.

Dangerous Permissions

Several of the protected operations for which the .NET Framework provides permissions can potentially allow the security system to be circumvented. These dangerous permissions should be given only to trustworthy code, and then only as necessary. There is usually no defense against malicious code if it is granted these permissions.

The dangerous permissions include:

  • SecurityPermission
    • UnmanagedCode. Allows managed code to call into unmanaged code, which is often dangerous.
    • SkipVerification. Without verification, the code can do anything.
    • ControlEvidence. Making up evidence allows security policy to be fooled.
    • ControlPolicy. The ability to modify security policy can disable security.
    • SerializationFormatter. The use of serialization can circumvent accessibility, as discussed previously.
    • ControlPrincipal. The ability to set the current principal can trick role-based security.
    • ControlThread. Manipulation of threads is dangerous because of the security state associated with threads.
  • ReflectionPermission
    • MemberAccess. Defeats accessibility mechanisms (can use private members).

Security and Race Conditions

Another area of concern is the potential for security holes exploited by race conditions. There are several ways in which these might manifest. The subsections that follow outline some of the major pitfalls that the developer must avoid.

Race conditions in the Dispose method

If a class's Dispose method is not synchronized, it is possible that cleanup code inside of Dispose can be run more than once. Consider the following code:

void Dispose() {
   if( _myObj != null ) {
      Cleanup(_myObj);
      _myObj = null;
   }
}

Because this Dispose implementation is not synchronized, it is possible for Cleanup to be called first by one thread and then by a second thread before _myObj is set to null. Whether this is a security concern depends on what happens when the Cleanup code runs. A major issue with unsynchronized Dispose implementations involves the use of resource handles (files, and so on). Improper disposal can cause the wrong handle to be used, which often leads to security vulnerabilities.
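One way to close this window, sketched here with illustrative names, is to make the test-and-clear step atomic so that Cleanup can run at most once no matter how many threads call Dispose:

```csharp
using System;
using System.Threading;

class Resource : IDisposable
{
    public int CleanupCount;            // for illustration only
    private object _myObj = new object();

    public void Dispose()
    {
        // Atomically take ownership of the field so that Cleanup runs
        // at most once, even if two threads call Dispose concurrently.
        object obj = Interlocked.Exchange(ref _myObj, null);
        if (obj != null)
        {
            Cleanup(obj);
        }
    }

    private void Cleanup(object obj)
    {
        CleanupCount++;                 // stands in for releasing a handle
    }
}
```

A lock statement around the test and the clear would serve equally well; the essential point is that no second thread can observe a non-null _myObj after the first thread has claimed it.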

Race conditions in constructors

In some applications it might be possible for other threads to access class members before their class constructors have completely run. You should review all class constructors to make sure that there are no security issues if this should happen, or synchronize threads, if necessary.

Race conditions with cached objects

Code that caches security information or Asserts might also be vulnerable to race conditions if other parts of the class are not appropriately synchronized. Consider the following code:

void SomeSecureFunction() {
   if(SomeDemandPasses()) {
      _fCallersOk = true;
      DoOtherWork();
      _fCallersOk = false;
   }
}

void DoOtherWork() {
   if( _fCallersOk ) {
      // Do something trusted without a demand.
   }
   else {
      // Demand permissions, then do the trusted work.
   }
}

If there are other paths to DoOtherWork that can be called from another thread with the same object, an untrusted caller can slip past a demand.

If your code caches security information, make sure that you review it for this vulnerability.

Race conditions in finalizers

Another source of race conditions is objects that reference a static or unmanaged resource that they free in their finalizer. If multiple objects share a resource that is manipulated in a class's finalizer, the objects must synchronize all access to that resource.

Other Security Technologies

This section lists some other security technologies that may be applicable to your code, but cannot be fully covered here.

On-the-Fly Code Generation

Some libraries operate by generating code and running it to perform some operation for the caller. The basic problem is generating code on behalf of lesser-trusted code and running it at a higher level of trust. The problem worsens when the caller can influence the generated code, so you must ensure that only safe code is generated.

You need to know exactly what code you are generating at all times. This means that you must have strict controls on any values that you get from a user, be they quoted strings (which should be escaped so they cannot include unexpected code elements), identifiers (which should be checked to verify that they are valid identifiers), or anything else. Identifiers can be dangerous because you can modify a compiled assembly so that its identifiers contain strange characters, which will probably break it (although this is often not a security vulnerability).

It is recommended that you generate code with Reflection.Emit, which often helps you avoid many of these problems.
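For example, the following Reflection.Emit sketch builds a small method directly from IL opcodes rather than from user-supplied source text, so callers have no string through which to inject code (the method body shown is a fixed addition, chosen purely for illustration):

```csharp
using System;
using System.Reflection.Emit;

static class CodeGen
{
    // Emits IL directly instead of compiling user-supplied source,
    // so the shape of the generated method is fully controlled here.
    public static Func<int, int, int> MakeAdder()
    {
        var dm = new DynamicMethod("Add", typeof(int),
                                   new[] { typeof(int), typeof(int) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push first argument
        il.Emit(OpCodes.Ldarg_1);   // push second argument
        il.Emit(OpCodes.Add);       // add them
        il.Emit(OpCodes.Ret);       // return the sum
        return (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
    }
}
```

Because no source text is compiled and no files are written to disk, this approach also avoids the on-disk tampering window discussed below.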

When you compile the code, consider whether there is some way a malicious program could modify it. Is there a small window of time during which malicious code can change source code on disk before the compiler reads it or before your code loads the DLL? If so, you must protect the directory containing these files, using code access security or an Access Control List in the file system, as appropriate.

If a caller can influence the generated code in a way that causes a compiler error, a security vulnerability may also exist there.

Run the generated code at the lowest possible permission settings (using PermitOnly or Deny).

Role-Based Security: Authentication and Authorization

In addition to securing code, some applications will want to implement security protection that limits use to certain users or groups of users. Role-based security, which is not within the scope of this document, is designed to handle those needs.

Handling Secrets

Data can be kept secret fairly effectively while in memory, but persisting it and keeping it secret is difficult to do properly. The first version of the .NET Framework does not provide managed code support for handling secrets. If you have the expertise, the cryptographic library provides much of the basic functionality required.

Encryption and Signatures

The System.Security.Cryptography namespace includes a rich set of cryptographic algorithms. Doing cryptography securely requires some expertise and should not be attempted in an ad hoc fashion. Every facet of handling both the data and the keys involved must be carefully designed and reviewed. Details of cryptography are beyond the scope of this document. Refer to standard references for details.

Random Numbers

System.Security.Cryptography.RandomNumberGenerator should be used to generate any random number that might be used in security, where true randomness is required. Use of pseudo-random number generators can involve predictability that can be exploited.
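A sketch of generating an unpredictable token with the cryptographic generator follows; RandomNumberGenerator.Create returns the default implementation (on the .NET Framework of this era, that was the RNGCryptoServiceProvider class), and the TokenGenerator wrapper is a hypothetical name:

```csharp
using System;
using System.Security.Cryptography;

static class TokenGenerator
{
    // Fills a buffer from the cryptographic random number generator;
    // unlike a pseudo-random generator seeded from the clock, the
    // output cannot be predicted from previously observed values.
    public static byte[] NewToken(int length)
    {
        byte[] token = new byte[length];
        using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(token);
        }
        return token;
    }
}
```

By contrast, a value from System.Random seeded in the usual way should never be used for session identifiers, keys, or other security-sensitive values.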

Setup Issues

This section describes considerations for testing the setup of your application or components to ensure best security practice and to protect the installed code. The following steps are recommended when installing managed or unmanaged code to ensure that the installation itself is secure. These steps should be performed on all platforms that support NTFS:

  1. Set up a system with two partitions.
  2. Freshly format the second partition; do not change the default ACL on the root of the drive.
  3. Install the product, changing the install directory to point to a new directory on the second partition.

Validate the following:

  1. Is any code that executes as a service or that normally is run by administrator-level users now world-writable?
  2. If the code were installed on a terminal server system in application server mode, can your users now write binaries that other users might run?
  3. Is there anything that ends up in a system area or subdirectory of a system area that might be writable by non-administrators?

Additionally, if the product interacts with the Web, be aware that occasional Web server exploits allow users to run commands that are often executed in the context of the IUSR_MACHINE account. Validate that there are no files or configuration items that are world-writable that a guest account could leverage under these conditions.