
An Introduction to Code Access Security

Visual Studio .NET 2003
 

Keith Brown
Pluralsight

January 2006

Applies to:
   Microsoft Visual Studio .NET
   Code Access Security (CAS)

Summary: The .NET deployment model is based on clients pulling the latest version of an app from a Web server. While this eliminates a lot of headaches, how is a client to know the code is secure? Keith Brown explains. (11 printed pages)

Contents

Introduction
Trust Levels
The Big Picture
Evidence
Permissions
Policy
Inside .NET Security Policy
Conclusion

Introduction

The Microsoft .NET Framework is a great platform for developing and deploying smart clients. The .NET deployment model is based on clients pulling the latest version of an app from a web server, which eliminates a lot of headaches. However, this introduces the potential for a client to download malicious code. How is a client to know the difference?

To deal with this, the .NET Framework introduced a security system called Code Access Security (CAS). CAS helps centralize trust decisions and introduces the notion of partially trusted code, which can be run with reduced permissions. I'll start by introducing some concepts and painting the overall picture of what CAS is intended to do as well as how it hangs together, and finally I will drill down a bit to dispel a little of the magic.

Trust Levels

Before the .NET Framework existed, Windows had two levels of trust for downloaded code. When browsing the web, you probably remember seeing dialogs like the one shown here:

Figure 1

There are two choices here: Yes and No. They represent levels of trust in the code you're about to install and run. If you choose No, the code won't run. If you choose Yes, the code will run with all the permissions you currently have based on your user login. Like most people, you're probably a member of the Administrators group, which means the downloaded code can do anything it wants to your machine.

This old model was a binary trust model. You only had two choices: Full Trust, and No Trust. The code could either do anything you could do, or it wouldn't run at all.

In the managed world, you still have these two options, but the CLR opens up a third option: Partial Trust. When you use partially trusted code, it will be allowed to execute, but it will be constrained by the .NET Framework and won't necessarily be able to do all the things you can do. In fact, there are a whole raft of permissions that control exactly what the code is allowed to do, and as you'll see shortly, these permissions can be granted and revoked using .NET security policy.

The most important concept to understand at this point is that partial trust grants a set of permissions that will always fall somewhere between no trust and full trust, where no trust means the code cannot run at all, and full trust means the code can do anything the user running it would normally be allowed to do. Fully trusted code run by an administrator can do administrative tasks. Fully trusted code run by a normal, nonprivileged user cannot do administrative tasks, but can access any resources the user can access, and do anything the user can do. From a security standpoint, you can think of fully trusted code as being similar to native, unmanaged code, like a traditional ActiveX control.

Figure 2

So when should you choose to run your code with partial versus full trust? Well, as a developer this decision is not in your hands. Code Access Security was not introduced to protect applications from users. Its main goal is to protect users from potentially malicious applications. Remember, I'm talking about code that's downloaded over the network here. Locally installed applications are fully trusted by default.

System administrators ultimately control security policy. If you want users to be able to run your smart client application from the network without having to install it on their machines, you'll either need to convince the system administrator to tweak security policy to allow your code to run with full trust, or you'll need to write your code carefully so that it runs properly with partial trust. Doing the latter requires a little bit of learning and a lot of patience, which is why I'm dedicating another upcoming article to the topic of writing partially trusted code.

The Big Picture

Before diving into the details of how Code Access Security works, let's stand back and look at the big picture. When an assembly is loaded, the CLR gathers evidence about that assembly. This includes the download location and might also include information about who authored the assembly.

The CLR feeds this evidence into its security policy engine, which decides which permissions to grant to the assembly based on the evidence. Security policy is controlled by administrators and users on the machine on which the code will run. The result is a permission set, which the CLR attaches to the assembly. These permissions will help decide what the code in the assembly can or cannot do.

Now, how are these permissions enforced? Stop and think for a moment about the managed code that you write. If you want to send an e-mail, write to a file, call a Web service or a stored procedure in a database, or even do something simple like reading an environment variable, what do you do? You use a class that's provided by the .NET Framework. It's these classes that gate access to sensitive resources like the file system, network, databases, and so on. If security policy doesn't grant your assembly permission to write to a particular file, the FileStream class will throw a SecurityException when you try to open that file for writing.
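To make this concrete, here's a minimal sketch (the helper name and temp-file path are mine, not from the Framework) showing the gate in action. Under full trust the open succeeds; an assembly that hasn't been granted FileIOPermission for the path gets a SecurityException from the very same call:

```csharp
using System;
using System.IO;
using System.Security;

class FileAccessDemo {
    // Returns true if the current grant set allows writing 'path';
    // false if the FileStream gate throws SecurityException.
    public static bool TryOpenForWrite(string path) {
        try {
            using (FileStream fs = File.Open(path, FileMode.OpenOrCreate,
                                             FileAccess.Write)) {
                return true;
            }
        } catch (SecurityException) {
            // Policy didn't grant FileIOPermission for this path.
            return false;
        }
    }

    static void Main() {
        // Under full trust this prints True.
        Console.WriteLine(TryOpenForWrite(Path.GetTempFileName()));
    }
}
```

Notice the code itself doesn't change between the two trust levels; only the grant set attached to the assembly does.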

So what's to stop you from simply going around the .NET Framework classes and calling directly to a Win32 function; say, CreateFile? Well, in order to do this, you must use the .NET Framework's interop layer, and that layer will throw a SecurityException if you haven't been granted permission to use interop. This is the type of permission that's only granted to fully trusted code. You won't be able to use P/Invoke or COM interop in a partially trusted assembly.

Oh, and since partially trusted code is also required to be typesafe (it must be verifiable or it won't run), you can't use pointers to get around the .NET Framework classes and call to native code directly. In short, partially trusted code is sandboxed and restricted by .NET security policy. Fully trusted code has none of these restrictions.

Now let's drill down a little deeper and look at the three components of Code Access Security: evidence, permissions, and policy.

Evidence

As with just about all aspects of Code Access Security, there are classes that represent each form of evidence, and you can write your own if you have specialized needs. Here are the most common types of evidence you'll encounter in the .NET Framework version 1.1:

Figure 3

These are real classes that you can find in the System.Security.Policy namespace, and I've categorized them to help you understand where they come from. Before the CLR even downloads an assembly, it's got to have a URL to find the assembly in the first place. This will typically be a file:// URL or an http:// URL, depending on whether the assembly is installed on the local machine or being loaded from the network.

The CLR computes zone and site evidence from the URL, as well. The former is simply the Internet Explorer zone that the URL belongs to. For example, file://c:\temp\myapp.exe is in the MyComputer zone, while https://www.xyz.com/utility.dll will most likely fall into the Internet zone. I say "most likely" because zones may be customized. For example, if www.xyz.com happens to be a website that you trust, you may have added it to your list of trusted sites via the Internet Options control panel, in which case the zone evidence would be Trusted.

Figure 4

For http:// style URLs, site evidence is also computed. In my example, the site would be www.xyz.com.

After downloading the assembly, the CLR examines its contents to determine hash, publisher, and strong name evidence. The hash value of an assembly is the SHA1 or MD5 hash of the assembly manifest, which contains hashes of each module making up the assembly. For all practical purposes, if the assembly changes (even if it is simply recompiled), its hash value will change.

Some authors sign their assemblies using Authenticode. In this case the CLR will produce publisher evidence that contains the code-signing X.509 certificate. Because of the public key infrastructure behind these certificates, publisher evidence is a reasonably secure way of making security policy decisions. If a publisher's private key is compromised, she can report it to her certificate authority who will publish a new Certificate Revocation List (CRL). This means that over time, as users download assemblies signed with the compromised key, the CLR will recognize that the publisher's certificate has been revoked and won't run any assemblies signed by the revoked key.

Most assemblies also have strong names, which will be packaged into evidence as well. Beware of using strong name evidence in security policy, however, as there is no key revocation infrastructure like there is with certificates.
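You can see exactly what evidence the CLR gathered for your own assembly by enumerating its Evidence collection. This sketch assumes you're running on the .NET Framework, where Assembly.Evidence is populated; the items you see will depend on where the assembly was loaded from:

```csharp
using System;
using System.Collections;
using System.Reflection;
using System.Security.Policy;

class EvidenceDump {
    public static Evidence GetEvidence() {
        // The CLR attached this evidence when it loaded the assembly.
        return Assembly.GetExecutingAssembly().Evidence;
    }

    static void Main() {
        IEnumerator e = GetEvidence().GetEnumerator();
        while (e.MoveNext()) {
            // Typical items for a locally installed exe: Zone, Url,
            // Hash, and (if signed) StrongName or Publisher.
            Console.WriteLine(e.Current.GetType().Name);
        }
    }
}
```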

Permissions

The .NET Framework version 1.1 defines a whole host of permissions, protecting everything from file and database access to thread suspension and resumption. Just like evidence, each permission is represented as a class, and you can define custom permission classes if the need arises. To give you a taste, here are the permission classes defined in System.Security.Permissions:

Figure 5

The identity permissions should look familiar, as they map directly onto the corresponding evidence. These are typically used to restrict which code can use your classes or methods, which only really works in partially trusted scenarios. This is a more advanced topic that I won't drill into any further here.

The resource permissions are the ones you'll most likely be interested in. These permissions control which classes in the .NET Framework you can use, and sometimes even which methods or properties you can use on those classes. Ultimately this controls which resources your code has access to.

Most of these permissions have parameters. The UIPermission class has parameters that control what type of windows you can draw in, as well as whether you'll be allowed to read the clipboard. The SecurityPermission class has a load of flags that control things ranging from whether your code can suspend threads to whether it can call through interop and get to native code directly.
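Permissions are ordinary objects, and their parameters are just constructor arguments. Here's a quick sketch using two of the classes from Figure 5; the particular parameter values are only examples, not recommendations:

```csharp
using System;
using System.Security.Permissions;

class PermissionParams {
    static void Main() {
        // Allow drawing only "safe" subwindows, with no clipboard
        // access at all.
        UIPermission ui = new UIPermission(
            UIPermissionWindow.SafeSubWindows,
            UIPermissionClipboard.NoClipboard);

        // One of SecurityPermission's many flags: the right to call
        // native code through interop.
        SecurityPermission interop = new SecurityPermission(
            SecurityPermissionFlag.UnmanagedCode);

        Console.WriteLine(ui.Window);     // SafeSubWindows
        Console.WriteLine(interop.Flags); // UnmanagedCode
    }
}
```

Policy grants a set of such objects to an assembly, and the Framework classes demand them at the point of use.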

Here's an example. Say your assembly is granted FileIOPermission to read the path "c:\temp\*". This means your code can read any file under c:\temp, including files in subdirectories (or more specifically, any file that the user running your code is allowed to read based on her login). And you can do this silently, without prompting the user, because your calls to File.Open() or the FileStream constructor will succeed.

But if your assembly is partially trusted, it's much more likely that it won't be granted FileIOPermission. If this is the case, trying to open the file directly, using File.Open() or by creating a new FileStream object, will generate a SecurityException, because those operations demand that you have FileIOPermission.

Your partially trusted assembly may only be granted FileDialogPermission for reading files. In this case, you may use the OpenFileDialog class to prompt the user to pick a file. You may then use the OpenFile method on OpenFileDialog to get your hands on a read-only FileStream. This way, lesser-trusted assemblies may open files if they are willing to get the user involved.

It's interesting to note that the FileName property on the OpenFileDialog class demands that you have FileIOPermission, because it discloses the location and name of the file the user picked. If you only have FileDialogPermission, you're only allowed to read the contents of the file, not to discover where it came from! These little gotchas can be a bit frustrating, which is yet another reason to follow up with the article on writing partially trusted code.

Policy

Earlier I mentioned how the CLR's security policy engine examines evidence and constructs a permission set for an assembly at load time. No one policy will fit all organizations, so the policy engine reads a set of XML files that contain permission grants. These files make up what is called .NET security policy.

Fortunately you don't have to edit these files by hand. The .NET Framework Configuration administrative tool makes editing policy pretty straightforward, and has several wizards to make the process easier for people who are new to .NET. But before I start drilling down into policy, let's talk about the default policy that ships with the .NET Framework, as you're guaranteed to run into it at some point.

The default policy works on the premise that code installed on your machine is more trusted than code that is loaded from the network. And, of course, there is a provision ensuring assemblies in the .NET Framework itself are always fully trusted. After all, some piece of code has to be trusted to implement this whole security infrastructure.

To keep things simple, Microsoft's default policy grants permissions based on zone evidence. The MyComputer zone is granted full trust. This represents locally installed code such as applications you've installed from a CD-ROM or DVD-ROM, or programs that you have manually downloaded and saved to your disk before running.

Next is the LocalIntranet zone. If you don't explicitly list which Web sites are part of this zone, it's simply defined as any domain name without a "." in the name (in other words, it's a NETBIOS hostname as opposed to a DNS name). For example, http://xyz would be considered part of the LocalIntranet zone by default, while http://www.microsoft.com would normally drop into the Internet bucket because of the dots in the name, as would http://207.46.250.119. Note that the decimal equivalent http://3475962487 will not be accepted—you can read more about the "Dotless-IP Address" bug in Michael Howard's book, "Writing Secure Code, 2nd ed."

Default security policy assigns what you might call a "medium trust" permission set to assemblies in the LocalIntranet zone. This means code from your local network will normally run with partial trust. If you've ever tried running a managed executable from a shared drive, even one on your local machine (like z:\MySmartClient.exe), you may have run into a SecurityException or two, because the zone you're running from is no longer the MyComputer zone.

Internet Explorer defines a couple of zones designed for customization by system administrators: the Trusted Sites and Restricted Sites zones. The former is granted a "low trust" permission set, and the latter is not trusted at all, which means managed code from a restricted site will not run by default.

And, finally, the Internet zone is the bucket into which all URLs fall if they can't be sorted into any of the other zones. The default assignment for this zone has had a history of change, oscillating between low and no trust, but as of version 1.1 of the .NET Framework, it has stabilized and is mapped to the low trust permission set by default.

Inside .NET Security Policy

To begin to understand how policy works, you should spend some time experimenting with the .NET Configuration tool, which you'll find under the Administrative Tools folder on any computer that has the .NET Framework installed. Let's start by looking at the default machine policy, which I've shown below:

Figure 6

Because there are so many different permissions, and each permission has so many parameters, there is a folder called Permission Sets where you can construct sets of permissions that you'll commonly use. Default policy is already organized this way, as you can see. There are four built-in permission sets that represent the four default levels of privilege granted to code that I mentioned earlier: full trust, medium trust, low trust, and no trust. Here is how they map onto the actual names in the policy:

  • FullTrust: this one is pretty obvious.
  • LocalIntranet: this is the "medium trust" permission set.
  • Internet: this is the "low trust" permission set.
  • Nothing: this is "no trust," no permissions at all.

By the way, don't let these names confuse you—it's an unfortunate historical artifact that the middle two are called LocalIntranet and Internet. It would be less confusing if they were named something more generic like MediumTrust and LowTrust.

You can see where these four levels come into play if you run some of the wizards provided by the .NET Configuration tool. For example, if you click on the Runtime Security Policy node (this is not shown in my previous example, but you'll see it if you run the tool), you'll see a list of tasks you can perform. One of them is called Adjust Zone Security, and I've shown the interesting part of this wizard below (note the slider bar with four levels):

Figure 7

In this screenshot, I've selected the Internet zone, and you can see that the slider bar is in the unlabeled second position, or what I call the "low trust" position. If you click around on the different zones, you'll see the slider go up and down depending on the zone; for example, the "My Computer" zone should be FullTrust by default. Pay attention to the description of the trust levels as the slider moves. This will give you a bit of a feel for what sorts of restrictions are in place at the various trust levels. If you're wondering why the phrase "might not be able to" is used a lot in these descriptions, bear with me and I'll show you soon!

Here's an educational experiment. Change the permissions assigned to the My Computer zone by dropping the slider bar all the way down to "No Trust." Don't worry, your machine won't melt if you do this! After you make this change, try compiling and running a simple application like the following:

class DoesNothing {
    static void Main() {}
}

Running this code should cause you to see a PolicyException that indicates the code is not allowed to execute. However, if you close the .NET Configuration tool (which is a managed application) and rerun it, you'll find that it runs just fine even with your new setting, and you can go back to the wizard and set your policy back to normal. You can also right-click the Runtime Security Policy folder and choose "Reset All" to get back to the default policies after you're done experimenting.

So what exactly is going on here? To explain what's happening, I need to talk a bit about the Code Groups folder in security policy, which is where permissions are actually granted. Each code group is a conditional permission grant. For example, right-click on the code group named My_Computer_Zone, and bring up its property sheet:

Figure 8

You can see that each code group has a membership condition and a permission set. The condition in this case is based on zone evidence. This code group consists of all assemblies in the My Computer zone (in other words, assemblies that have Zone evidence with MyComputer as the zone).

If you click on the permission set tab, you'll see that any code matching this group will be granted the full trust permission set. Each code group is simply a conditional permission grant. If your assembly matches the membership condition, you'll get all the permissions listed in the corresponding permission set. In this case, it's FullTrust.

Now you should be able to figure out what the wizard did earlier: as you moved the slider up and down and made your changes, you were really just changing the permission set for the My_Computer_Zone code group between one of the four I listed earlier: FullTrust, LocalIntranet, Internet, and Nothing.
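If you'd rather script such changes than click through the wizard, the Framework SDK also ships a command-line policy tool, caspol.exe. The commands below are a sketch based on the 1.x tool's syntax; the numeric labels come from the -listgroups output, and in default machine policy the label 1.1 refers to the My_Computer_Zone code group:

```shell
# List the machine-level code groups along with their numeric labels.
caspol -machine -listgroups

# Point the My_Computer_Zone code group (label 1.1 in default policy)
# at the built-in Internet ("low trust") permission set.
caspol -machine -chggroup 1.1 Internet

# Restore the default machine policy when you're done experimenting.
caspol -machine -reset
```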

If you spent some time experimenting, you may have noticed that even with the My_Computer_Zone code group set to grant Nothing, you are still able to run programs like the .NET Configuration tool and Visual Studio .NET. While these programs are not entirely managed, they do load managed code from the local machine that arguably should not run if you've cranked the My Computer zone down to no trust.

The answer to this riddle can be found by drilling a bit deeper into the code group folder (note that I've expanded My_Computer_Zone):

Figure 9

You see, code groups are arranged in a tree. All we were doing with the wizard was changing the permission set associated with the My_Computer_Zone code group, but there are a couple of code groups under that one that still grant permissions, no matter what My_Computer_Zone grants! These are the grants that allowed the .NET Framework and associated tools to function even in the face of radical changes such as the experiment I proposed earlier. Code groups are evaluated from the top of the tree (All_Code) down, and as long as a parent node matches (All_Code matches all assemblies), its children will be evaluated as well. So the Microsoft_Strong_Name code group, which matches the strong name found on all CLR assemblies and tools, allowed the .NET Configuration tool to run. And because code groups like this one can grant permissions independently of the zone-based groups, the wizard (which only changes the zone-based groups) uses the rather vague wording "might not be able to" when describing restricted actions.
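To dispel the last bit of magic, here's a toy model of that evaluation. The types and names are mine, not the CLR's: each group holds a membership condition and the name of a permission set, children are consulted only when their parent matches, and the result is the union of everything that matched.

```csharp
using System;
using System.Collections.Generic;

class CodeGroup {
    public string Name;
    public Func<string, string, bool> Matches;   // (zone, publisher)
    public string PermissionSet;                 // e.g. "FullTrust"
    public List<CodeGroup> Children = new List<CodeGroup>();
}

class PolicyDemo {
    // Walk the tree top-down; children are evaluated only when the
    // parent's membership condition matches the evidence.
    public static HashSet<string> Grants(CodeGroup root, string zone,
                                         string publisher) {
        HashSet<string> grants = new HashSet<string>();
        Collect(root, zone, publisher, grants);
        return grants;
    }

    static void Collect(CodeGroup group, string zone, string publisher,
                        HashSet<string> grants) {
        if (!group.Matches(zone, publisher)) return;
        grants.Add(group.PermissionSet);
        foreach (CodeGroup child in group.Children)
            Collect(child, zone, publisher, grants);
    }

    // Mimics the shape of default policy after the experiment:
    // My_Computer_Zone grants Nothing, but its Microsoft_Strong_Name
    // child still grants FullTrust.
    public static CodeGroup BuildSampleTree() {
        CodeGroup strongName = new CodeGroup {
            Name = "Microsoft_Strong_Name",
            Matches = (zone, pub) => pub == "Microsoft",
            PermissionSet = "FullTrust"
        };
        CodeGroup myComputer = new CodeGroup {
            Name = "My_Computer_Zone",
            Matches = (zone, pub) => zone == "MyComputer",
            PermissionSet = "Nothing"
        };
        myComputer.Children.Add(strongName);
        CodeGroup allCode = new CodeGroup {
            Name = "All_Code",
            Matches = (zone, pub) => true,
            PermissionSet = "Nothing"
        };
        allCode.Children.Add(myComputer);
        return allCode;
    }

    static void Main() {
        CodeGroup root = BuildSampleTree();
        // A Microsoft-signed local assembly still picks up FullTrust,
        // which is why the .NET Configuration tool kept running.
        Console.WriteLine(
            Grants(root, "MyComputer", "Microsoft").Contains("FullTrust"));
        // Ordinary local code is granted only "Nothing".
        Console.WriteLine(
            Grants(root, "MyComputer", "SomeoneElse").Contains("FullTrust"));
    }
}
```

In the real engine this union happens separately at each policy level (enterprise, machine, user), and the levels are then intersected, but the tree walk itself works just as sketched.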

While I don't have time in this article to discuss all the intricacies of policy, you can see that it's really flexible. You can use the code group tree to grant permissions based on different combinations of evidence, and you can use the Permission Sets folder to organize permissions so you're not duplicating effort.

Conclusion

Deploying code over a network is dangerous without a comprehensive security system to verify and constrain that code, and Code Access Security is Microsoft's solution to the problem. It's a flexible beast, if somewhat complex, and as a developer working on smart clients, you should learn all you can about it, as it will play a big role in your life!

 

About the author

Keith Brown is a co-founder of Pluralsight, a premier developer training company, where he focuses on security for developers. Besides writing the Security Briefs column for MSDN Magazine, he authored The .NET Developer's Guide to Windows Security (Addison Wesley, 2004) and Programming Windows Security (Addison Wesley, 2000). Keith also speaks at many conferences, including TechEd and WinDev. Check out his blog at www.pluralsight.com/keith.
