Are You in the Know?

Find Out What's New with Code Access Security in the .NET Framework 2.0

Mike Downen

This article is based on a prerelease version of the .NET Framework 2.0. All information herein is subject to change.

This article discusses:
  • Overview of CAS
  • Sandboxing and trust levels
  • Developing hosts and frameworks
  • AppDomains and security
This article uses the following technologies:
.NET Framework 2.0, Visual Studio 2005

Contents

Why CAS?
Understanding Sandbox Permissions
Hosts and Frameworks
AppDomains and Security
Defining the Sandbox
How to Host
CAS and Frameworks
Security-Transparent Code
Using Transparency
Conclusion

Security in the Microsoft® .NET Framework encompasses many technologies: role-based security in the base class libraries (BCL) and in ASP.NET, cryptography classes in the BCL, and new support for using access control lists (ACLs) are just a few examples. One of the technologies in the .NET security spectrum provided by the common language runtime (CLR) is code access security (CAS). This article discusses the role of CAS in .NET security and some key new features and changes in CAS in the .NET Framework 2.0.

Most developers using the .NET Framework don't need to know much about CAS except that it exists. As with floating point numbers, CAS is a feature of the Framework about which intricate and detailed knowledge can be very useful for some applications, but one that most developers can largely leave in the background.

Why CAS?

When people ask me about access control, they often want to know more about role-based security—controlling access to resources based on user identity. CAS can be difficult to understand because it is not based on user identity. Instead, it is based on the identity of the code that is running, including information such as where the code came from (for example, from the local computer or from the Internet), who built it, and who signed it. The actions the code can perform and the resources the code can access are restricted based on this "evidence" associated with the code and its identity.

Why would you want to restrict the things code can do based on its identity? Often you don't. For example, you may run a fancy and expensive graphics editing program that can access any resource on the computer that you, as the user running it, can access (files, registry settings, and the like). In this case, you know and trust the publisher of the software and are willing to allow it a high level of access to the platform on which it is running. While you typically want an app to run with the least-privileged user account necessary in order to prevent accidental corruption of the computer, trust of the application itself is not an issue in this case.

However, there are situations in which you run code and either do not know or do not fully trust its author. The most common example of this today is in browsing the Web. Modern browsers often run code from the Web sites they visit. This code is usually JavaScript or DHTML, but it could be some other form of executable recognized by your browser. When browsers run this code on your computer, they do it in a sandbox—a restricted execution environment that controls which resources the code can access on your computer. This sandbox is a necessity; otherwise, relatively anonymous code from the Internet would be able to read your private files and install viruses on your computer every day. Many browsers offer different levels of sandboxing based on the origin of the code that is running. For example, Microsoft Internet Explorer defines the concept of zones. Code from the Internet zone cannot do as much as code from the Trusted sites zone. Security exploits occur when either a bug exists in the software enforcing the sandboxing rules or the user is tricked into approving code for execution outside the sandbox.

Running code from a Web site in your browser is a common scenario for sandboxing and code identity-based security. CAS in the CLR provides a general mechanism to sandbox code based on code identity. This is its main strength and the main scenario for its use. CAS is used today by Internet Explorer to sandbox managed code from Web sites running in the browser (either controls or standalone apps). It's used to sandbox applications running from a network share on your local intranet. It is also used by Visual Studio® Tools for Office (VSTO) in order to host managed add-ins for Microsoft Office documents. In server scenarios, CAS is used by ASP.NET for Web applications and by SQL Server™ 2005 for managed stored procedures. CAS pops up in lots of scenarios, usually behind the scenes.

Understanding Sandbox Permissions

Developers often ask what they need to know about CAS. If you're an app developer, the answer depends on whether you are writing an app that will run in a sandbox, such as a managed control in Internet Explorer. If so, you need to know the following:

  • What you can do in the sandbox
  • Whether that is enough functionality for the application to run successfully
  • How to elevate the trust of the application to get more permissions if necessary

For example, managed controls in the browser use a default set of sandbox permissions: the Internet Permission Set. In terms of resource access, this sandbox allows the app to create "safe" user interface elements (transparent windows are considered unsafe, for example, as they could be used to commit spoofing or man-in-the-middle attacks), make Web connections back to its site of origin, get access to the file system and printers with user consent, and store a limited amount of data in isolated storage (similar to cookies for Internet Explorer). It does not allow an app to read arbitrary parts of the file system or registry, or to read environment variables. Nor does the sandbox allow the app to create connections to a SQL Server database, or to call COM objects or other unmanaged code. Different hosts may define different sandboxes. For example, SQL Server and ASP.NET each define a different set of permissions for their low-trust apps. Figure 1 outlines these sandbox permissions.

Figure 1 Permissions of Various Sandboxes

| Resource | Internet Permission Set | LocalIntranet Permission Set | ASP.NET Minimal Trust Permission Set |
| --- | --- | --- | --- |
| Execute code | Yes | Yes | Yes |
| User interface | Safe top-level windows | Unrestricted | None |
| File system | Isolated Storage: isolated by user with a quota. File Dialog: can access data in files chosen by the user. | Isolated Storage: isolated by user but no quota. File Dialog: can access data in files chosen by the user. | None |
| Network | Can make connections to site of origin only. | Can make connections to share or site of origin only. Can access DNS. | None |
| Printing | Can print only if allowed by the user. | Programmatic access to the default printer and to other devices if allowed by the user. | None |
| Environment | None | USERNAME environment variable. | None |
| Reflection | Public reflection. | Public reflection and can emit types. | None |
| Registry | None | None | None |
| Database | None | None | None |
| Win32 APIs or COM objects | None | None | None |
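To make the file system row concrete, here is a minimal sketch of saving a small preference file from the Internet sandbox using isolated storage (types from System.IO and System.IO.IsolatedStorage); the file name and contents are hypothetical:

private static void SavePreferences()
{
   // Isolated storage is allowed in the Internet sandbox (with a quota),
   // even though direct file system access is not
   using (IsolatedStorageFile store =
      IsolatedStorageFile.GetUserStoreForDomain())
   using (StreamWriter writer = new StreamWriter(
      new IsolatedStorageFileStream("prefs.txt", FileMode.Create, store)))
   {
      writer.WriteLine("theme=blue");
   }
}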

Once you know the options for a sandbox, you can decide whether a sandbox app is the right choice for your needs. For example, if you're writing a managed control in Internet Explorer and are planning to connect to a SQL database, your control would not be able to run in the sandbox by default. You would either have to change your plans (for instance, by accessing data using a Web service deployed on the control's server), or elevate the trust level of your control on the client computer. For managed controls in the browser, elevating trust involves deploying CAS policy to trust both the site the control comes from and the control itself. Other hosts have different mechanisms for trusting apps. To change the trust level of an ASP.NET app, for example, you change a setting in a configuration file.
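That ASP.NET setting looks something like the following; Medium is just one of the built-in trust levels:

<!-- in the application's web.config -->
<system.web>
   <trust level="Medium" />
</system.web>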

In some cases, your app must run in the sandbox because it is unlikely your users would install and run it any other way. Suppose you're writing a simple survey control for a Web site. The control asks the user a few questions and records their responses. It is unlikely most people would download and run a full-trust application for something so simple. However, there are also cases in which you're writing a trusted app but would still benefit from having it run in the sandbox. Figure 2 shows the levels of trust with and without sandboxing. And if your application must run with full trust, make sure it still behaves well in partial-trust environments (see the sidebar "Handling Partial Trust Gracefully" for more information).

Figure 2 Sandbox Trust Scenarios

|  | Sandbox | No Sandbox |
| --- | --- | --- |
| Trusted | Typical benefits include easier installation, better reliability, lower total cost of ownership. | Typical scenario for client code—app requires a high level of access to the platform. |
| Not trusted | You get better security—a safe way to run low-trust code. | A dangerous scenario—code should not run. |

Typically, it's much easier to deploy a sandbox application. With ClickOnce, for example, sandbox applications just run and do not require any action from the user, while high-trust applications must either be signed with a trusted certificate or show a prompt to the user (for more information about ClickOnce, see ClickOnce: Deploy and Update Your Smart Client Projects Using a Central Server and Escape DLL Hell: Simplify App Deployment with ClickOnce and Registration-Free COM).

You may also get reliability benefits from sandboxing applications. For example, running as many managed stored procedures as possible in the lowest trust level of SQL Server means there's less code that has full access to the server and, therefore, less code that can damage the server (whether maliciously or accidentally).

Visual Studio 2005 provides much improved support for building sandbox client applications to be deployed using ClickOnce. The Security pane of the project properties page is where most of the action takes place (see Figure 3). Here, you enable security settings and set the zone you're targeting for your application. For a sandbox app, you'd typically choose the Internet zone. You can then use the Calculate Permissions button to give you a rough estimate of the permissions required for your application, and whether it fits in the sandbox (this calculation functionality is also available from the permcalc command-line tool included with the .NET Framework 2.0 SDK).

Figure 3 Visual Studio 2005 Security Pane

It's important to understand that this is a rather rough, usually conservative, estimate from purely static analysis. You should always test your application with the permissions with which it will run. You do this by simply debugging the application after setting the target zone. Visual Studio will run the application with the permissions you specified on the security pane.

Hosts and Frameworks

If you're writing hosts or frameworks, you need to know much more than just the basics about the sandbox. Host developers need to know how to host low-trust code. Framework developers need to know how to write frameworks or libraries that expose functionality to low-trust code.

However, before I get into those issues, I want to discuss a topic of common confusion—running with least privilege. Running with least privilege refers to running code with the fewest privileges or permissions required to get the job done. This limits the damage that can be caused if there is an exploitable bug in the code. For Windows® security, running with the least-privileged user identity possible (for example, a normal user instead of an admin user) is always a good idea. For CAS, running with least privilege means running with the minimal set of CAS permissions required for an application or library to perform its tasks. While running with least privilege is generally a good thing, it does have its limits. For example, almost all host or framework code must run with some powerful privileges that are outside of the default sandbox. This code frequently has to call Win32® APIs or other unmanaged code and control resources on the machine such as files, registry keys, processes, and other system objects. Since this kind of code already requires elevated permissions, trying to remove a permission or two is generally not worth the effort from a security point of view, given that many powerful permissions can be elevated to full trust.

Your time is better spent auditing and testing the code to make sure it is secure in the face of hostile callers. (In the context of CAS, running with full trust means the code can do whatever the user running it could do on the system. If the process running full-trust code does not have admin privileges, even full trust code does not have unlimited access to the machine's resources.) In order to make that easier, the .NET Framework 2.0 includes a new technology called security-transparent code, which I'll discuss later in this article with other framework development issues. But first, let's get back to hosting.

In general, hosting means using the CLR to execute code in the context of another application. For example, SQL Server 2005 hosts the runtime to execute stored procedures written in managed code. I'm going to focus here on the aspects of hosting relevant to security, starting with a review of AppDomains. There is a lot more than that to hosting, however—see Stephen Pratschner's book Customizing the Microsoft .NET Framework Common Language Runtime (Microsoft Press®, 2005) for further details.

AppDomains and Security

AppDomains provide sub-process isolation for managed code. This means each AppDomain has its own set of state. Verifiable code in one AppDomain cannot tamper with code or data in another AppDomain unless interfaces created by the hosting environment allow them to interact. How does this work? Verifiably type-safe code (produced by default by both the C# and Visual Basic® .NET compilers) cannot access memory in arbitrary ways. Each instruction is checked by the runtime using a set of verification rules to make sure it accesses memory in a type-safe way. Therefore, the runtime can guarantee the isolation of AppDomains when running verifiable code, and it can prevent code from running if it's not verifiable.

This isolation allows a host to run code with different trust levels in the same process safely. Low-trust code can run in a separate AppDomain from either trusted host code or other low-trust code. The number of AppDomains needed for hosting low-trust code depends on the isolation semantics you want for your host. For example, Internet Explorer creates one AppDomain per site for managed controls. Controls from one site are able to interact in the same AppDomain, but cannot interfere with (or take malicious advantage of) controls from another site.

Handling Partial Trust Gracefully

By default, managed applications that run from Internet Explorer—either managed controls in the browser or no-touch deployment applications—run with a reduced set of CAS permissions. You can change CAS policy, however, to run specific trusted applications with more permissions. For example, you could trust a specific strong name when assemblies signed with it come from a specific URL. The .NET Framework Configuration Tool creates Microsoft Installer (MSI) files to do this. This security policy update would have to be installed on any machine running your application, either by instructing a user to run the MSI file themselves or by having the MSI file deployed across your enterprise by your IT department.

If you write a managed application that runs from the browser but requires elevated permissions, you should check to make sure that your app or control gets the permissions it needs to run, and fails gracefully if it does not.

Why wouldn't your app have enough permissions? Perhaps a user forgot to install the security policy update before running your application. There are also cases when installing a new version of the .NET Framework would require the user to install updated security policy. For example, controls in the browser will always run against the latest version of the .NET Framework installed on the machine. CAS Policy, however, is specific to each version of the Framework. So if the user installed a policy update for version 1.1 of the Framework, and then later installed version 2.0, their control will now run against 2.0 and will not see the changes in policy applied to 1.1.

Making sure your application has the permissions it expects and helping the user fix the problem if it does not can help ensure that your controls will run in the future. You should make this permission check early in the life of your program. For example, if your application expects to run with full trust, you would add a method to your main form like the one shown in Figure A.

You would then change your application's Main method to something like the following:

[STAThread]
static void Main()
{
   if (CheckApplicationPermissions())
   {
      Application.Run(new Form1());
   }
}

The Web page to which you send the user for help could detect the latest version of the runtime installed on the machine (via the user agent header in the HTTP request) and provide the user with a policy update specific to their version of the runtime.
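As a rough sketch of that detection (inside a hypothetical ASP.NET page; Internet Explorer reports installed runtimes with User-Agent tokens such as ".NET CLR 2.0.50727"):

// Pick a policy update based on the newest runtime the browser reports.
// The MSI file names are hypothetical.
string userAgent = Request.UserAgent ?? "";
string policyFix = userAgent.Contains(".NET CLR 2.0")
   ? "PolicyUpdate20.msi" : "PolicyUpdate11.msi";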

ClickOnce in the .NET Framework 2.0 makes this situation simpler for standalone apps. A ClickOnce application states its permission requirements in its manifest, and the application either runs with those permissions or does not run at all. So this permission-checking code is not necessary in a ClickOnce app, but is still useful for controls in the browser.

Figure A Checking For Full Trust

private static bool CheckApplicationPermissions()
{
   try
   {
      // See if we're running in full trust
      new PermissionSet(PermissionState.Unrestricted).Demand();
      return true;
   }
   catch (SecurityException)
   {
      try
      {
         // Not running in full trust; help the user
         MessageBox.Show("Application error: please go to " +
            "https://contoso.com/fix.aspx for fix instructions.");
      }
      catch (SecurityException)
      {
         // App doesn't even have permission to show UI, not much to do
      }
   }
   return false;
}

Figure 4 AppDomain Firewall

CAS allows you to specify the trust level of code as it loads (mapping evidence into permissions via policy). Additionally, CAS allows you to specify the trust level of AppDomains as they are created. In the .NET Framework 1.1, you can do this by setting evidence on the AppDomain. This evidence is mapped to a set of permissions by policy. Whenever the flow of control transitions into the AppDomain, the AppDomain's permission grant is pushed on the stack, just like the permission grants for assemblies, and is considered during any stack walks to evaluate demands from that AppDomain. This mechanism, called the AppDomain firewall (see Figure 4), prevents cross-AppDomain luring attacks and provides yet another isolation mechanism to isolate code between AppDomains.
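For example, here is a minimal sketch of the 1.1-style approach, using types from System.Security and System.Security.Policy (the domain name is arbitrary):

// Create an AppDomain whose evidence says "treat code here as if it
// came from the Internet"; policy maps the Zone evidence to the
// Internet permission set
Evidence internetEvidence = new Evidence();
internetEvidence.AddHost(new Zone(SecurityZone.Internet));
AppDomain sandboxDomain =
   AppDomain.CreateDomain("InternetSandbox", internetEvidence);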

CAS allows you to load code (assemblies) into an AppDomain with different trust levels (permissions), but this is not the recommended way to securely sandbox code. Given that all code in an AppDomain shares state, it is difficult to prevent elevations of privilege among multiple assemblies within the AppDomain. For example, say you load one assembly with permission to execute only and another assembly with the LocalIntranet permission set. It may be possible for the former (running with only permission to execute) to elevate to the LocalIntranet permission set, unless that code was written with extreme care for handling this scenario. You also lose the benefit of the AppDomain firewall. Thus, as I discussed, it's useful to create multiple AppDomains to host code that should be isolated, even if it is running at the same trust level, as Internet Explorer does for controls from separate sites.

So instead of loading multiple assemblies into a single AppDomain at different trust levels, I recommend using separate AppDomains for each logical chunk of low-trust code (each site in Internet Explorer, for example), and keeping the security model simple. For each AppDomain, there should be two trust levels:

  • The first trust level is full-trust platform code, which includes the .NET Framework and the host's trusted code that interacts with the low-trust code.
  • The second trust level is low-trust code itself, running at one trust level, rather than mixed trust levels.

Figure 5 AppDomain Trust

This model is easier to understand and more secure, which is one of the reasons why it is used by ClickOnce. For any ClickOnce app, there is code at two trust levels running in the AppDomain: the platform code running with full trust and the application code running with the trust level specified in the application's manifest. The CLR security system has been optimized for this mode, and as a result code adhering to this model will perform well. See Figure 5 for an illustration of both trust levels.

Defining the Sandbox

Another task in hosting is defining the sandbox, or set of permissions, you'll grant to low-trust code. As I mentioned, the CLR defines some permission sets for sandboxes, as do other hosts (such as ASP.NET). The two relevant permission sets defined by the CLR are the Internet permission set and the LocalIntranet permission set. The Internet permission set is designed to be safe for running anonymous, potentially hostile code. It has been audited by Microsoft for this purpose and is the only set of permissions thoroughly tested for such a scenario. Therefore, it should be your default choice for hosting low-trust code. But while the LocalIntranet permission set will prevent hostile code from taking over a computer, it discloses some information users may not want anonymous code to have, such as the user name of the currently logged-in user. So it is actually more appropriate for somewhat trusted code, such as code on your organization's intranet.

What about hosts that have defined their own permission sets, such as ASP.NET? The ASP.NET medium-trust level, for example, provides security by using both CAS and Windows identity-based security to restrict code. One member of my team coined a good term for this kind of scenario: sand-duning. A "sand dune" is a set of restricted permissions that doesn't by itself prevent hostile code from unauthorized access to computing resources. Instead, it is useful when combined with other security enforcement mechanisms and for keeping honest code honest. It is a common technique for reliability in server scenarios wherein multiple apps are sharing computing resources and you want to prevent one poorly behaved app from being able to take down the server.

How to Host

How do you actually make all this happen? In the .NET Framework 1.1, you create an AppDomain, making sure to pass in evidence. This evidence will be used to compute the permissions for the AppDomain. Then you set up an AppDomain policy level, which should grant full trust to platform and host code, and grant low trust to all other code. Next, you set the AppDomain policy level on the AppDomain. Finally, you call the trusted host code in the new AppDomain to bootstrap running low-trust code.

In the .NET Framework 2.0, this process is simplified:

  1. Use the new simple sandboxing API to create an AppDomain. This sets the AppDomain trust level, the sandbox permission set, and the list of trusted host assemblies all in one step, without creating an AppDomain policy level (a sketch follows this list).
  2. Call trusted host code in the new AppDomain to bootstrap running low-trust code.
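Here is a hedged sketch of step 1 using the new CreateDomain overload; the application base path is hypothetical, and a real host would also pass the StrongNames of its trusted assemblies:

private static AppDomain CreateSandbox()
{
   // Grant set for the sandbox; this sketch grants execution only
   PermissionSet sandboxPerms = new PermissionSet(PermissionState.None);
   sandboxPerms.AddPermission(
      new SecurityPermission(SecurityPermissionFlag.Execution));

   AppDomainSetup setup = new AppDomainSetup();
   setup.ApplicationBase = @"C:\LowTrustApps"; // hypothetical path

   // The trailing params StrongName[] argument (omitted here) lists
   // host assemblies to be granted full trust in the new AppDomain
   return AppDomain.CreateDomain("Sandbox", null, setup, sandboxPerms);
}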

For all the details on this, take a look at the Shawn Farkas article in this issue. Shawn explains how to do this and discusses all the techniques for doing it securely. While the simple sandboxing model is sufficient for many scenarios, you can have more control over the creation of AppDomains through the AppDomainManager class. Shawn's article talks about how you can use AppDomainManager to implement custom policy behavior, if you need it.

CAS and Frameworks

Now, I'm going to shift gears and discuss another developer scenario in which CAS is relevant—building frameworks. A framework is a library of reusable classes designed to be used by other applications. It can be one assembly with a few types, or a large set of assemblies with many types, like the .NET Framework. Frameworks are usually deployed in the Global Assembly Cache (GAC) so they can be used by multiple applications. They are platform components that run with full trust to take advantage of all that the system offers. This topic is also relevant to host developers, because they usually must build at least a limited framework for hosted code to interact with, if not a full-fledged framework.

Any time a framework developer wants to expose functionality to code that does not have full trust, they must be aware of CAS. One example is building a sound library. You could define a custom permission, say SoundPermission, and demand it when your method to play a sound is called. If the demand succeeds, you would assert permission to call unmanaged code and call the Win32 API to play the sound.
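A hedged sketch of that pattern follows; SoundPermission and SoundPermissionState are the hypothetical custom types from the example, while the P/Invoke targets the real PlaySound API in winmm.dll (requires System.Runtime.InteropServices):

[DllImport("winmm.dll")]
private static extern bool PlaySound(string sound, IntPtr mod, uint flags);

public static void Play(string fileName)
{
   // Demand the hypothetical custom permission from all callers
   new SoundPermission(SoundPermissionState.Play).Demand();

   // Callers proved they hold SoundPermission; now elevate just
   // enough to call into Win32
   new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
   try
   {
      PlaySound(fileName, IntPtr.Zero, 0);
   }
   finally
   {
      CodeAccessPermission.RevertAssert();
   }
}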

An alternative and more common scenario is to interact with existing system permissions in building a framework. For example, let's say you're implementing a math library that is accessible from low-trust code. If your library only does calculations and does not access any system resources, it can largely ignore CAS. But suppose in the initialization of your math library, you read some environment variables to help you decide how to optimize your computations. You'll need to audit the code to make sure it is safe to call in the context of low-trust code. The call must be completely opaque to the low-trust code, meaning the low-trust code cannot control which environment variables are checked and cannot get the values of the environment variables. In order to do this successfully in a low-trust context, you will have to assert environment permissions to read the environment variables.

An assert is an elevation of privilege. When framework code does an assert, it is going to perform an operation the caller may not normally have permission to do (the framework must already have the permission, however, as well as the permission to perform an assert). Whenever framework code does an assert, that code must be audited carefully to make sure the assert is safe. This typically means checking the parameters passed in to make sure they've been validated and canonicalized where appropriate, making sure no inappropriate data is leaked back to low-trust code, and checking that low-trust code cannot inappropriately alter the state of high-trust code to trick it into doing something unsafe later on (known as a luring attack). Asserts are usually pretty easy to spot. You can search your source code for them, and there are a few FxCop rules related to asserts.

A more subtle and therefore potentially more dangerous elevation of privilege is satisfying a link demand. When a method has a link demand, its immediate caller is checked when the method is just-in-time (JIT) compiled. So, in effect, it is a one-level demand. It can be dangerous in that trusted code can satisfy a link demand, then turn around and expose that functionality to low-trust code without realizing it satisfied a link demand. One strategy trusted code might try in order to ensure security is to avoid doing any asserts and just let all demands flow to callers. But with link demands, trusted code is doing an implicit assert. To be fully secure, the trusted code would have to convert all link demands to full demands, as well. While there are some FxCop rules that help find code that satisfies link demands, you can't visually audit code to see if it satisfies any link demands. This makes auditing code more error prone.
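A hedged sketch of the trap and the fix; the method names are hypothetical:

// Imagine this library method protects itself with a link demand:
// only its *immediate* caller is checked, at JIT time
[SecurityPermission(SecurityAction.LinkDemand,
   Flags = SecurityPermissionFlag.UnmanagedCode)]
public static void DangerousOperation() { /* ... */ }

// This trusted wrapper satisfies the link demand just by being
// trusted. If low-trust code can call it, the link demand is
// silently bypassed, so convert it to a full demand first.
public static void SafeWrapper()
{
   new SecurityPermission(
      SecurityPermissionFlag.UnmanagedCode).Demand();
   DangerousOperation();
}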

Security-Transparent Code

Transparency, a new feature in the .NET Framework 2.0, is designed to help developers of frameworks write more secure libraries that expose functionality to low-trust code. You can mark an entire assembly, just some classes in an assembly, or just some methods in a class as security-transparent. Security-transparent code cannot elevate privilege. This has three specific ramifications:

  • Security-transparent code cannot perform asserts
  • Any link demand that would be satisfied by security-transparent code turns into a full demand
  • Any unsafe (unverifiable) code that must execute in security-transparent code causes a full demand for the skip verification security permission

These rules are enforced during execution by the CLR. Basically, security-transparent code passes all of the security requirements of the code it calls up to its callers. Demands just flow through it, and it cannot elevate privilege. So if a low-trust app calls some security-transparent code that causes a demand for high privilege, the demand will flow up to the low-trust code and fail—the security-transparent code could not stop the demand even if it wanted to. The same security-transparent code called from full-trust code will result in a successful demand.

There are three attributes used in transparency, summarized in Figure 6. When using transparency, you factor your code into security-transparent and security-critical (the opposite of security-transparent) methods. The bulk of your code that handles data manipulation and logic can usually be marked as security-transparent, while the small amount of your code that actually performs the elevations of privilege will be marked as critical. So far, groups within Microsoft that have adopted transparency have been able to mark upwards of 80 percent of their code as security-transparent. This has allowed them to focus auditing and testing efforts on the 20 percent of their code that is security-critical. For backward compatibility, all code in the .NET Framework 2.0 and previous versions that is not annotated with transparency attributes is considered to be security-critical. For now, you have to opt in to transparency. There are also FxCop rules for transparency, to help developers make sure they get the rules of transparency right when initially building their code, rather than having to debug a runtime error later.

Figure 6 Transparency Attributes

| Attribute | Description |
| --- | --- |
| SecurityTransparent | Only allowed at the assembly level. Opts the assembly into transparency and marks all types in the assembly as security-transparent. The assembly cannot contain any security-critical code. |
| SecurityCritical | When used at the assembly level, opts the assembly into transparency and marks all code in the assembly as security-transparent by default, but signifies that the assembly may contain security-critical code too. When used at the class or method level, marks the class or method as security-critical. That class or method can now perform elevations of privilege. |
| SecurityTreatAsSafe | Can be placed on private or internal security-critical members to allow security-transparent code within the assembly to access them. Otherwise, security-transparent code cannot access private or internal security-critical members within its assembly, as doing so may influence security-critical code and make unexpected elevations of privilege possible. |

The reason for the SecurityTreatAsSafe attribute may not be obvious at first. Think of the security-transparent and security-critical code within your assembly as actually separated into two assemblies. The security-transparent code would not be able to see the private or internal members of the security-critical code. Additionally, the security-critical code is generally audited for access to its public interface. You would not expect private or internal state to be accessible outside of the assembly—you would want to keep the state isolated. So to allow the isolation of state between security-transparent and security-critical code while still providing the ability to override when necessary, the SecurityTreatAsSafe attribute was introduced. Security-transparent code cannot access private or internal members of security-critical code unless those members have been marked with SecurityTreatAsSafe. Before adding SecurityTreatAsSafe, the author of critical code should audit that member as though it were being exposed publicly.
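A hedged sketch of that rule in practice; the member names are hypothetical:

// Critical and *not* marked treat-as-safe: transparent code in this
// assembly cannot call it
[SecurityCritical]
private static void WriteToProtectedStore(string data) { /* asserts */ }

// Critical because it asserts, but audited and marked safe for
// transparent code in this assembly to call
[SecurityCritical, SecurityTreatAsSafe]
internal static string GetTempPath()
{
   new EnvironmentPermission(
      EnvironmentPermissionAccess.Read, "TEMP").Assert();
   try
   {
      return Environment.GetEnvironmentVariable("TEMP");
   }
   finally
   {
      CodeAccessPermission.RevertAssert();
   }
}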

Using Transparency

I'll use the math library example to demonstrate the syntax for transparency. First, to opt in to transparency, you have to add the SecurityCritical attribute as an assembly-level attribute:

using System.Security;

// opt in to low-trust callers
[assembly: AllowPartiallyTrustedCallers]
// opt in to transparency, but some code will be critical
[assembly: SecurityCritical]

Now all types in the assembly will be security-transparent by default. But let's say the constructor for the math library class needs to assert to read an environment variable, while all the other methods in the library can be security-transparent. The class definition would look something like Figure 7.

What Else is New in CAS?

Transparency and the simple sandboxing model for hosts are two of the biggest investments in CAS in the .NET Framework 2.0. But there are some other changes in CAS you should be aware of, as well.

The GAC is Full Trust

Assemblies in the GAC now always get full trust. This cannot be changed in policy or overridden by an AppDomainManager. Assemblies in the GAC are part of the trusted platform—you need admin rights to install an assembly in the GAC. Platform assemblies generally must run with full trust to ensure the proper functioning of the system. Trying to run them in a sandbox or reduce their privileges does not make sense and can lead to platform instability. The .NET Framework 1.0 and 1.1 provided the full-trust list, which had to include any assembly participating in policy, so most GAC assemblies were already getting full trust regardless of policy. Rather than having to know about both the full-trust list and the GAC, a framework developer only has to install their framework in the GAC now.

Identity Permission Demands are Satisfied by Full Trust

In the .NET Framework 1.0 and 1.1, you could use an identity demand, for instance for a StrongNameIdentityPermission, to create the appearance of being able to restrict the callers of your code based on identity. These demands would fail even if the calling code had full trust. The problem with this feature is that it is trivial to circumvent. A full-trust caller can explicitly load one of its assemblies with the strong name evidence of your strong name and then call your supposedly protected assembly. Given that this feature offers no security against full-trust callers, the behavior was changed so that full-trust callers will satisfy identity demands. If you are looking for some way to have "semi-internal" APIs that are shared across libraries your code controls, take a look at the new InternalsVisibleTo attribute. It provides this functionality with none of the security implications.
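For example (the friend assembly name is hypothetical; a strong-named friend must also be specified with its full public key; requires System.Runtime.CompilerServices):

// Internal types and members of this assembly become visible to the
// hypothetical MyCompany.Utilities assembly
[assembly: InternalsVisibleTo("MyCompany.Utilities")]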

You Cannot Turn off CAS Permanently

In the .NET Framework 1.0 and 1.1, you could completely turn off CAS enforcement by executing CASPol.exe –s off at the command line. While this is handy for some testing scenarios, it was easy to forget you had turned off CAS and thus not remember to turn it back on. This would leave your machine vulnerable to a number of attacks—any low-trust code that ran on your machine would have no restrictions on what it could do. For .NET Framework 2.0, CASPol.exe –s off does not permanently turn off security. Instead, when you execute this command, CASPol.exe will pause and wait for you to press a key. While CASPol.exe is paused, security is off. As soon as you press a key and CASPol.exe finishes and exits, security is back on.

Figure 7 Security-Critical MathLibrary

public class MathLibrary 
{
   private string m_procArch;

   // the constructor is critical
   [SecurityCritical]
   public MathLibrary()
   {
      new EnvironmentPermission(EnvironmentPermissionAccess.Read,
         "PROCESSOR_ARCHITECTURE").Assert();
      m_procArch = Environment.GetEnvironmentVariable(
         "PROCESSOR_ARCHITECTURE");
      CodeAccessPermission.RevertAssert();
   }

   // the rest of these methods are transparent

   public float ComputeFutureValue(float presentValue,
      float interestRate, int termInMonths)
   {
      ...
   }

   public int ComputeMedian(int[] values)
   {
      ...
   }
}

Transparency allows you to build frameworks in which most of the code runs in the context of its caller, while explicitly marking the code that can elevate privilege. This allows you to focus your security efforts on the code that is most sensitive security-wise, which is important as your codebase grows, and also makes code exposed to low-trust callers cheaper to build and maintain.

Conclusion

I've discussed the default sandboxes the .NET Framework provides and what it takes to run applications inside them. Developers who want to host low-trust code or to extend the platform have to know more about CAS. New features have been added to the .NET Framework making it easier to host low-trust code and extend the platform more securely. As managed code spreads to more and more places, CAS continues to evolve, making it easier to sandbox low-trust code and build a more secure platform (for a look at other changes to CAS in the .NET Framework 2.0, see the sidebar "What Else is New in CAS?"). CAS in the .NET Framework 2.0 is designed to enable more developers to take advantage of these scenarios and enable more secure extensibility.

Mike Downen is the program manager for security on the CLR team. He works on Code Access Security, the cryptography classes, and the ClickOnce security model. You can read Mike's blog and contact him at blogs.msdn.com/CLRSecurity.