This article may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. To maintain the flow of the article, we've left these URLs in the text, but disabled the links.

MSDN Magazine

Security in .NET: Enforce Code Access Rights with the Common Language Runtime

Keith Brown
This article assumes you're familiar with C# and Microsoft .NET
SUMMARY: Component-based software is vulnerable to attack. Large numbers of DLLs that are not tightly controlled are at the heart of the problem. Code access security in the Common Language Runtime of the Microsoft .NET Framework addresses this common security hole. In this model, the CLR acts as the traffic cop to assemblies, keeping track of where they came from and what security restraints should be placed on them. Another way the .NET Framework addresses security is by providing preexisting classes which have built-in security. These are the classes that are invoked in .NET when performing risky operations such as reading and writing files, displaying dialog boxes, and so on. Of course, if a component calls unmanaged code, it can bypass code access security measures. This article covers these and other security issues.

COM components can be incredibly useful. They can also be terribly dangerous. The usual approach to building applications on Windows® revolves more and more around buying third-party COM components, or even DLLs with classic C-based interfaces, and gluing them together into a single process. Granted, careful use of this modular approach can promote reuse, loose coupling, and other long sought after benefits to the software development process, but it often also leads to gaping security holes.
      In the second part of my two-part series on Microsoft Internet Information Services (IIS) security in the July 2000 issue of MSDN® Magazine, I discussed one of the most common and most feared attacks against a software program: the buffer overflow exploit. One silly but incredibly common bug lurking somewhere in a DLL can allow a determined attacker to not only crash the host process, but also to hijack its security context. Gluing a process together from widely available third-party DLLs makes the problem even worse because the attacker has lots of time to discover the various gaps to be exploited in each DLL in use. He has plenty of time to hone his attacks in a safe location before lining your application up in his sights.
      Authenticode® was supposed to help solve this problem, and while it's better than nothing, the problem is that it takes a punitive approach, not a preventive one. Learning the identity of an attacker is little consolation after he's e-mailed lists of links to steamy Web sites to all of your closest friends and relatives with messages displaying your return address. With typical users ready and willing to install any DLL that claims to enhance their online experience, even if these DLLs are legitimately signed by their authors, how can a nonprogrammer determine which DLL actually did the damage? What if the hard drive where the DLL was installed is erased or damaged during the attack? What happens to the evidence? How do you pursue the attacker in court?
      Clearly you not only need to have accountability, but also access control for code—especially mobile code like the ActiveX® components you've come to rely upon. When an administrator runs a process constructed from components, he should be given some assurance that his security context will be protected from rogue components. Confining the buffer overflow problem with verification of managed code is a first step. Code access security, while not a silver bullet, is a significant second step and the main focus of this article.
      Please note that this article is based on the technology preview of the common language runtime (CLR), which is subject to change before the final release.

Overview

      First, I'll discuss a basic overview of how code access security works. Use this section as a roadmap for the more detailed discussions that I'll dive into later. For those of you familiar with the security model in Java 2, you'll find that the CLR uses a similar model.
      Generally, when you need to perform security-sensitive operations such as reading or writing files, changing environment variables, accessing the clipboard, displaying dialog boxes, and so on, you'll do so using classes that come prepackaged with the CLR. These are written with security demands that indicate to the system the type of action that was requested, giving the system a chance to grant or deny the request. If the system denies the request, it does so by throwing an exception of type SecurityException. How does the system decide whether to grant or deny each request? It does so by looking at a security policy that can be customized on a machine-by-machine basis and also on a user-by-user basis.
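      For example, here's a minimal sketch of how calling code might cope with a denied request (the class, method, and path below are hypothetical); if policy withholds the necessary permission, the demand made by the file classes surfaces as a SecurityException that the caller can catch:

  using System;
  using System.IO;
  using System.Security;

  class ConfigReader {
      // Try a security-sensitive operation; if policy denies the
      // underlying file permission demand, fall back gracefully.
      public static string ReadSettings() {
          try {
              using (StreamReader r = new StreamReader("c:\\myapp\\settings.txt")) {
                  return r.ReadToEnd();
              }
          }
          catch (SecurityException) {
              return String.Empty; // run with default settings instead
          }
      }
  }
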
      Security policy in the CLR is really quite simple at a conceptual level. The security policy poses questions to assemblies at load time. Today there are two common questions, and they may be asked in slightly different ways, as you'll see later: Where did this assembly come from? Who authored this assembly?
      Security policy is the way that various answers to these questions can be mapped onto specific sets of permissions. For instance, you can say "Code that came from https://www.foobar.com/baz is allowed to read files from c:\quux and below, and it should not be allowed to display dialog boxes." In this case, the question being posed is "What URL did this assembly come from?" which is simply a variation on the first question. From a 10,000-foot view, this summarizes CLR code access security, but it becomes more interesting when you drill down a bit. Let's do that now.

Demanding Permissions

      Imagine you were writing the following class to allow simple reading or writing from a file (implementations are omitted):

  public class MyFileAccessor {
      public MyFileAccessor(String path, bool readOnly) {}
      public void Close() {}
      public String ReadString() {}
      public void WriteString(String stringToWrite) {}
  }

Here's how the class would typically be used:

  class Test {
      public static void Main() {
          MyFileAccessor fa = new MyFileAccessor("c:\\foo.txt", false);
          fa.WriteString("Hello ");
          fa.WriteString("world");
          fa.Close(); // flush the file
      }
  }

      Given this usage model, it's clear that the simplest place to put a security check is in the constructor. The constructor arguments tell you exactly which file is to be accessed, and whether it will be accessed in read or read/write mode. If you present this information to the underlying security policy, and this results in a SecurityException, you'll simply allow that exception to propagate back to the caller. In this case, since the constructor never completes, the caller is denied an instance of your class and thus will not be able to make calls to any of the other nonstatic member functions. This simplifies the security checks for the class tremendously, which is good from a programmer's perspective, but also good from a security perspective. The less access-checking logic you write in your code, the fewer chances you have to get it wrong.
      The drawback to this approach can be seen in the following scenario: if a client constructs an instance of your class, satisfying the initial demand in the constructor, and then shares that reference with another client (potentially in another assembly), the new client won't be subject to your constructor-centric code access demands. This is similar to the way kernel object handles work in Windows 2000: the access check happens when the handle is opened, not each time it is used. Here, again, you see the tension between performance and security.
      Now in the real world, you won't be writing classes like MyFileAccessor to access files. Instead, you'll use system-provided classes such as System.IO.FileStream, which have already been designed to do the appropriate security checks for you. They're often done in the constructor with all of the performance and security implications I discussed earlier. As with much of the Microsoft® .NET architecture, for purposes of extensibility the security checks invoked by the FileStream class can also be used directly by your own components. Figure 1 shows how you might add the same security check to the constructor for MyFileAccessor, if you couldn't rely on a system-provided class to do the checks on your behalf.
      The code in Figure 1 performs a security check in two steps. It first creates an object that represents the permission in question, in this case, access to a file. It then demands that permission. This causes the system to look at the permissions of the caller, and if the caller doesn't have this particular permission, Demand throws a SecurityException. Actually, Demand does a bit more sophisticated checking than this, as you'll see shortly.
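      Figure 1 itself isn't reproduced inline, but as a rough sketch of the same two-step idea, the constructor check might look something like this:

  using System.Security.Permissions;

  public class MyFileAccessor {
      public MyFileAccessor(String path, bool readOnly) {
          // Step one: build an object describing exactly the access we need
          FileIOPermission perm = new FileIOPermission(
              readOnly ? FileIOPermissionAccess.Read
                       : FileIOPermissionAccess.Read | FileIOPermissionAccess.Write,
              path);
          // Step two: demand it; throws SecurityException if a caller lacks it
          perm.Demand();
          // ... safe to open the file now ...
      }
  }
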
      Figure 2 summarizes the various code access permissions currently exposed by the runtime. Be aware that there also exists a set of code identity permissions that directly test the answers to the various questions that I mentioned in the overview section. To avoid hardcoding security policy decisions into components, however, code identity permissions are less commonly used by general-purpose code. I'll talk more about security policy later.
      At this point it's useful to note that even if the code successfully gets past the .NET security check, the underlying operating system will also do its own access checks. (Windows 2000 and Windows NT®, for instance, restrict access to files on NTFS partitions via access control lists.) So even if an assembly is granted unrestricted access to the local file system by .NET security, and the components from that assembly are being hosted in a process running as Alice, those components will only be able to open files that Alice could normally open based on the security policy of the underlying operating system.

The Luring Attack

      Recall that the two basic questions you ask an assembly in order to determine what permissions it should be granted are "Where did this assembly come from?" and "Who authored this assembly?" The answers to these questions dictate the base set of permissions that the assembly will be granted.

Figure 3 MyFileAccessor

      Imagine that MyFileAccessor was implemented in terms of a FileStream object, as shown in Figure 3. Here's what the code might look like:

  using System.IO;

  public class MyFileAccessor {
      private FileStream m_fs;

      public MyFileAccessor(String path, bool readOnly) {
          m_fs = new FileStream(path, FileMode.Open,
              readOnly ? FileAccess.Read : FileAccess.ReadWrite);
      }
      // ...
  }

      Assume that you've implemented MyFileAccessor in this way, and given it to your friend Alice, who has installed it on her local hard drive. What sort of code access permissions does the MyFileAccessor assembly have now? If I look at my own local security policy, I see that it grants local components a permission set named FullTrust, which includes unrestricted access to the file system (you should remember that the underlying operating system may limit this further).
      Figure 4 shows that MyFileAccessor, installed on Alice's local hard drive, may be used by various types of components. It's possible for a local, trusted component to use MyFileAccessor to access files. However, if Alice happens to point her browser to some rogue Web site, a downloaded .NET component from that site could use MyFileAccessor for evil purposes. This is an example of a luring attack: MyFileAccessor can be lured into doing evil things (like opening up private documents that Alice would rather not expose).

Figure 4 A Luring Attack Example

      Rather than forcing each middleman component to do its own access checks to avoid this sort of funny business, the CLR simply verifies that every caller in the call chain has the permissions demanded by the FileStream. In Figure 4, when a local component called NotepadEx uses MyFileAccessor to open files, it will be given unrestricted access because the entire call chain originates from locally installed assemblies. However, when RogueComponent attempts to use MyFileAccessor to open files and FileStream calls Demand, the CLR will walk up the stack, discover that one of the callers in the call chain doesn't have the requisite permissions, and Demand will throw a SecurityException.

Implied Permissions

      Before going any further, it's useful to note that some permissions, when granted, imply others. For instance, if you are granted all access to the directory c:\temp, you are implicitly granted all access to its children, grandchildren, and so on. You can discover this via the IsSubsetOf method present on all code access permission objects. For instance, if you run the code in Figure 5, you'll get the output:

  p2 is subset of p1

      The presence of implied permissions makes administration considerably easier, but remember to be as specific as you can when demanding permissions. The CLR will automatically compare your demand to see if it's a subset of a granted permission.
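      Figure 5 isn't reproduced inline either, but a sketch along those lines is straightforward (the directory names are just placeholders):

  using System;
  using System.Security.Permissions;

  class ImpliedPermissions {
      public static void Main() {
          // All access to c:\temp...
          FileIOPermission p1 = new FileIOPermission(
              FileIOPermissionAccess.AllAccess, "c:\\temp");
          // ...implies all access to anything beneath it
          FileIOPermission p2 = new FileIOPermission(
              FileIOPermissionAccess.AllAccess, "c:\\temp\\logs");

          if (p2.IsSubsetOf(p1))
              Console.WriteLine("p2 is subset of p1");
      }
  }
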

Protecting Yourself

      Assemblies are given a basic set of permissions at load time. The CLR and its host discover these permissions by asking questions about the assembly and giving the answers to the security policy, which converts them into permissions. However, just because an assembly is granted a basic set of permissions based on policy doesn't mean that all of these permissions will be available to satisfy demands at runtime. Let me explain.
      If an assembly is installed locally, it will likely have wide-ranging, even completely unrestricted permissions, at least as far as code access security is concerned. Imagine if one of these highly trusted assemblies were to make a call to a user-provided script. Depending on what that script is supposed to do, the assembly invoking the script might want to restrict the effective permissions before making the call (see Figure 6).
      This code places extra restrictions in the current stack frame. This means that if Calculate were to try to access an environment variable, sneak a peek at the contents of the clipboard, or mess around with files anywhere in the file system underneath the two sensitive directories specified in the code, or if any components that Calculate used internally tried to do any of these things, when the CLR walked the stack to check access, it would note that a stack frame explicitly denies these permissions and would deny the request. Note that in this case, I've grouped several permissions into a single PermissionSet and denied the entire set. This is because each stack frame can have at most one permission set used for denial, and calling the Deny function replaces the old set for the current stack frame. This means that the following code does not do what you might expect it to do:

  FileIOPermission p1 = new FileIOPermission(
      FileIOPermissionAccess.AllAccess, "c:\\sensitiveStuff");
  FileIOPermission p2 = new FileIOPermission(
      FileIOPermissionAccess.AllAccess, "c:\\moreSensitiveStuff");
  p1.Deny(); // p1 is denied
  p2.Deny(); // now p2 is denied (not p1)

      In this code, the second call to Deny effectively overwrites the first, so only p2 will be denied in this case. Using a permission set allows more than one permission to be denied simultaneously. Calling the static RevertDeny function on the CodeAccessPermission class empties the denial permission set for the current stack frame.
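      Figure 6 isn't shown inline, but here is a sketch of the same idea; Calculate stands in for the user-provided script mentioned earlier, and the interface is purely illustrative:

  using System.Security;
  using System.Security.Permissions;

  public interface IUserScript { void Calculate(); }

  public class ScriptHost {
      public static void RunRestricted(IUserScript script) {
          // Group both permissions so a single Deny covers them
          PermissionSet ps = new PermissionSet(PermissionState.None);
          ps.AddPermission(new FileIOPermission(
              FileIOPermissionAccess.AllAccess, "c:\\sensitiveStuff"));
          ps.AddPermission(new FileIOPermission(
              FileIOPermissionAccess.AllAccess, "c:\\moreSensitiveStuff"));
          ps.Deny(); // one denial set for this stack frame

          try {
              script.Calculate();
          }
          finally {
              CodeAccessPermission.RevertDeny(); // clear the denial set
          }
      }
  }
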
      If you find yourself denying lots of individual permissions, you might find it easier to take another approach, and instead of using Deny and RevertDeny, you can use PermitOnly and RevertPermitOnly. This works well when you know exactly which permissions you'd like to allow.
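      A sketch of the PermitOnly flavor, again with a placeholder path and the same illustrative interface, looks like this:

  using System.Security;
  using System.Security.Permissions;

  public class ReadOnlyScriptHost {
      public static void RunRestricted(IUserScript script) {
          // Nothing outside this set can satisfy a demand from this frame down
          PermissionSet allowed = new PermissionSet(PermissionState.None);
          allowed.AddPermission(new FileIOPermission(
              FileIOPermissionAccess.Read, "c:\\publicData"));
          allowed.PermitOnly();

          try {
              script.Calculate();
          }
          finally {
              CodeAccessPermission.RevertPermitOnly();
          }
      }
  }
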

Asserting Your Own Authority

      The stack-walking mechanism works to protect general-purpose classes like FileStream and MyFileAccessor, which can be used in many different ways. For example, a good usage of FileStream would be to log errors to a well-defined log file in a well-defined directory on the user's hard drive. An evil usage of FileStream would be to compromise the contents of the local security policy file. The stack-walking mechanism is designed to assure FileStream that no matter how many intermediate assemblies are traversed before the FileStream object is instantiated, the demand for file system access must be satisfied by all of those assemblies. This avoids luring attacks like the one I described earlier.
      With all the goodness it provides, sometimes the stack-walking mechanism simply gets in the way. Figure 7 shows a class called ErrorLogger that provides the well-defined error logging service I mentioned earlier.
      Imagine that the ErrorLogger class was installed on Alice's local hard drive, and that its assembly was granted full access to the file system by the code access security policy on Alice's machine (as of this writing, local components are granted unrestricted access to the file system in the default security policy). But what if this class was designed to provide service to other assemblies, some of which aren't granted permissions to write to the local file system?
      Clearly, ErrorLogger is considerably safer than MyFileAccessor for use by arbitrary components, which allows access to any file the client specifies. ErrorLogger is a simple class that can only be used to append strings to a single well-defined file. But because the stack-walking mechanism doesn't know this, when the FileStream constructor demands permissions of its callers, the demand will fail unless every caller in the chain has the FileIOPermission demanded. If this is an impediment, it can be removed by having ErrorLogger assert its own authority to write to the log file. Figure 8 shows the new implementation.
      The new version of the ErrorLogger class, installed as a local component, will also be granted full access to the file system. In this case, it asserts a file IO permission before using the FileStream to actually open the file. Note that you can only assert permissions that your assembly actually has been granted.
      Each stack frame has the potential to have an asserted permission set, and when the stack walk reaches that stack frame, it will consider the asserted permissions satisfied. The stack walk won't even continue unless there are other permissions being demanded that aren't satisfied by the asserted permission set. Note that I didn't bother to call RevertAssert. RevertAssert isn't necessary in this case, as the assertion can safely stay in place until the call to Log returns, at which point the stack frame, including the asserted permission set, is torn down. This also applies to Deny and PermitOnly.
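      Figure 8 itself isn't reproduced inline, but the general shape of the assertion-based logger is roughly this (the log path is a placeholder):

  using System.IO;
  using System.Security.Permissions;

  public class ErrorLogger2 {
      private const string LogPath = "c:\\myapp\\errors.log";

      public void Log(string message) {
          // Vouch for our callers: we only ever append to this one file
          FileIOPermission perm = new FileIOPermission(
              FileIOPermissionAccess.Write | FileIOPermissionAccess.Append,
              LogPath);
          perm.Assert();

          // FileStream's demand is satisfied at this frame by the assertion
          using (StreamWriter w = new StreamWriter(
                     new FileStream(LogPath, FileMode.Append, FileAccess.Write))) {
              w.WriteLine(message);
          }
          // No RevertAssert needed; the assertion dies with this stack frame
      }
  }
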

Whither Assert?

      Clearly, the ability to assert permissions can be abused. For instance, a locally installed component that is granted full trust could simply assert all permissions and do whatever it likes, no matter who its clients are. This is clearly a horrible idea, but how can you know that a component won't do this once installed on your machine? Since assertion is such a powerful facility with the potential for abuse, its usage is also governed by a permission class, SecurityPermission. This class actually represents several different permissions governing usage of security-related classes and policy. You can think of most of these permissions as meta-permissions.
      Consider the ErrorLogger2 class shown in Figure 8. By asserting its own authority to write to a single distinguished file, has it subverted the security policy of the system? What sorts of attacks are possible? A rogue component could inject fake error messages to confuse the user. It could also send very large strings to try to fill up the user's hard drive. So even though ErrorLogger2 seems to be safer than a more general-purpose class like MyFileAccessor when asserting its own authority, there are still attacks that can occur simply because of the assertion.
      Should you simply avoid using Assert because of issues like this? Like most questions about security, there isn't a definite answer. It certainly complicates the security model, so a good rule of thumb would be to get a jury of your peers to review any use of this feature. Also note that your assertion may be denied by security policy, as many administrators will want to disallow assertions for all but the most trusted local components. This could bring your application to a screeching halt if you rely on assertions throughout your code. You might find it helpful to specifically catch the SecurityException generated by your call to Assert and attempt to do your work in the presence of a full stack walk when assertions are disallowed.
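      One way to code that fallback is sketched below; the class name and path are placeholders:

  using System.Security;
  using System.Security.Permissions;

  public class DefensiveLogger {
      public void Log(string message) {
          FileIOPermission perm = new FileIOPermission(
              FileIOPermissionAccess.Write | FileIOPermissionAccess.Append,
              "c:\\myapp\\errors.log");
          try {
              perm.Assert();
          }
          catch (SecurityException) {
              // Assertions are disallowed by policy; proceed anyway and let
              // the full stack walk happen when the file is opened below
          }
          // ... open and append to the log file as before ...
      }
  }
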
      One instance in which assertions are absolutely essential is when crossing the boundary from managed to unmanaged code. Consider the system-defined FileStream class as an example. Clearly, this class needs to make calls into the underlying OS, which is implemented in unmanaged code, in order to actually open, close, read, and write files. The interop layer will demand a SecurityPermission when these calls are made, specifically demanding the UnmanagedCode permission. If this demand were to propagate up the stack, no code would be allowed to open files unless also granted the permission to make calls into unmanaged code.
      The FileStream class effectively converts this extremely generic demand into a more granular demand, specifically a demand for FileIOPermission. It does this by demanding a FileIOPermission in its constructor. If this demand succeeds, the FileStream object feels comfortable asserting the UnmanagedCode permission before actually making calls to the operating system. The unmanaged calls made by FileStream are not random calls into unmanaged code; rather, they are calls that open a specific file for a specific purpose, an intention indicated by the earlier demand in the constructor. The mscorlib assembly, which hosts FileStream and other trusted components, is considered trusted to perform these policy conversions and is thus granted the Assertion permission. Before trusting any other assemblies with the Assertion permission, you should have a high degree of trust that the assembly will help enforce, not subvert, your security policy.

Declarative Attributes

      If you plan to use Deny, PermitOnly, or Assert in your components, be aware that each of these actions can be accomplished not only programmatically, but also declaratively. For instance, Figure 9 shows a third implementation of the error logger that uses a declarative attribute, in this case, FileIOPermissionAttribute. You should remember that in C#, the Attribute suffix can be omitted for brevity when declaring attributes.
      There are a couple of benefits to using this approach. First, it's a bit easier to type. Second, and more importantly, declarative attributes become part of the metadata for the component, and can be discovered easily via reflection. This would allow a tool to scan an assembly and discover, for instance, whether it makes use of assertions, and could perhaps list the methods and classes that assert various permissions. The tool could also discover potential conflicts with security policy; remember, assertions will often be disallowed, especially if the component isn't installed on the local hard drive.
      The main drawback to this approach is that it's impossible for the method to catch an exception if the assertion request is denied. This particular drawback applies to asserting permissions. You'll never have this problem if you're simply restricting permissions with declarative attributes.
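      Figure 9 isn't reproduced inline, but the declarative version amounts to moving the assertion into an attribute; the class name and path below are placeholders:

  using System.IO;
  using System.Security.Permissions;

  public class ErrorLogger3 {
      // The assertion is now metadata, discoverable via reflection.
      // Note the omitted Attribute suffix, as mentioned above.
      [FileIOPermission(SecurityAction.Assert,
          Append = "c:\\myapp\\errors.log")]
      public void Log(string message) {
          using (StreamWriter w = new StreamWriter("c:\\myapp\\errors.log", true)) {
              w.WriteLine(message);
          }
      }
  }
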
      The SecurityAction enumeration is used with declarative permission attributes and includes several options that can be used to fine-tune the permissions available to your code, as well as demand permissions of your clients, either at load time or at runtime. Figure 10, taken from the .NET Framework SDK documentation, lists and categorizes these options. For example, compare the following two attribute declarations:

  [SecurityPermission(SecurityAction.Demand,
      UnmanagedCode = true)]
  [SecurityPermission(SecurityAction.LinkDemand,
      UnmanagedCode = true)]

      If the first of these declarations were applied to a method, a normal stack walk would take place for each call to the method at runtime. On the other hand, if the second declaration was used, the check would only take place once for each reference to the protected method. This would occur at just-in-time (JIT) compile time. Also, the second declaration only demands the permission of the code linking to it; a full stack walk is not performed for LinkDemand. I'll come back to some of the other attributes in this list in the discussion of security policy later in this article.

Attacks Against Code Access Security

      As I have more time to experiment with the code access security infrastructure, I expect to see other interesting attacks come to light. Right now, a couple of attacks you will obviously be susceptible to are misuse of the Assertion and UnmanagedCode security permissions. I've discussed the dangers of assertion already, but calling into unmanaged code is another tricky issue.
      If an assembly is allowed to call into unmanaged code, it can bypass virtually all code access security. For instance, if an assembly is not granted permission to the local file system, but is allowed to call into unmanaged code, it can simply make direct calls to the Win32® file system API to do its dirty work. As I mentioned earlier, these calls will be subject to whatever operating system security checks are in force, but this often isn't reassuring, especially when the attacker's code ends up getting loaded into a privileged environment such as an administrator's browser or a daemon process running in the SYSTEM logon session.
      From an administrator's logon session, you could easily imagine an attacker using the Win32 file APIs to simply rewrite the security policy for the local machine, which is currently stored in an administrator-writable XML file. Or for that matter, the attacker could use the same Win32 file APIs to replace the CLR execution engine itself. All bets are off when an attacker executes unmanaged code with administrative privileges. Clearly, these attacks can be thwarted by careful management of security policy, which I'll discuss next.
      Another obvious attack involves permission escalation by change of location. When an assembly is used from an Internet URL, it will typically have significantly fewer permissions than when it is installed locally. One of the first goals of an attacker will be to try and convince the victim to install a copy of the assembly on his local hard drive, thus immediately escalating its permissions. With so many users already willing to install ActiveX controls from Internet sites without much thought, this will be a challenging problem. Watch my Web site for more ideas about potential attacks and thoughts on prevention (see https://www.develop.com/kbrown).

Security Policy

      Throughout this article, I've been hinting at the presence of a security policy for assigning code access permissions. This policy can become rather sophisticated, but the ideas are quite easy to understand once you grasp the basics. First of all, it's important to note that permissions are assigned on a per-assembly basis. I've broken down the process of discovering these permissions into three basic steps:

  1. Gather evidence.
  2. Present the evidence to security policy and discover the assigned permission set.
  3. Fine-tune the permission set based on assembly requirements.

Evidence

      When I first started experimenting with the CLR, I thought evidence was a strange term. It sounded more like the security infrastructure had been designed by a group of lawyers, rather than by a group of computer scientists. But after spending some time with the CLR, I discovered that the name is really quite appropriate. Evidence in a courtroom supplies information that can help to answer questions asked by the jury: "What was the murder weapon?" or "Who signed the contract?"
      In the case of the CLR, evidence is the set of answers to questions posed by security policy. Based on these answers, security policy can automatically grant permissions to code. Here are the questions posed by policy as of this writing:

  • From what site was this assembly obtained?
  • From what URL was this assembly obtained?
  • From what zone was this assembly obtained?
  • What's the strong name of this assembly?
  • Who signed this assembly?

      The first three questions are just different ways of querying the location from which the assembly originated, while the remaining two questions focus attention on the author of the assembly.
      In a courtroom, evidence is submitted by one party, but can be challenged by the opposing party, and often the jury helps decide whether the evidence is well founded. In the case of the CLR, there are two entities that may gather evidence: the CLR itself and the host of the application domain. Since this is an automated system, there is no jury; whoever submits evidence to be evaluated by policy must be trusted to not submit false evidence. This is the reason for the special security permission ControlEvidence. The CLR itself is naturally trusted to provide evidence, since you must already trust it to enforce the security policy. Therefore the ControlEvidence security permission applies to hosts. As of this writing, three hosts are provided by default: Microsoft Internet Explorer, ASP.NET, and the shell host, which launches CLR applications from the shell.
      To make this more concrete, consider the following function found on the System.AppDomain class:

  public int ExecuteAssembly(
      string fileName,
      Evidence assemblySecurity
  );

      Although a browser may have already downloaded the assembly into a cache on the local file system, it should provide the evidence for the actual origin of the assembly via the second parameter.
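      For example, a host that has cached a download locally might run it along these lines (the URL echoes the earlier example, and the cache path is a placeholder):

  using System;
  using System.Security;
  using System.Security.Policy;

  class BrowserHostSketch {
      static void RunDownloadedAssembly() {
          // Describe where the code really came from, not where it's cached
          Evidence evidence = new Evidence();
          evidence.AddHost(new Url("https://www.foobar.com/baz/gadget.dll"));
          evidence.AddHost(new Zone(SecurityZone.Internet));

          // Policy sees Internet-zone evidence despite the local file path
          AppDomain.CurrentDomain.ExecuteAssembly(
              "c:\\cache\\gadget.dll", evidence);
      }
  }
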

Evaluating Security Policy

      Once the host and the CLR have gathered all the evidence, it is submitted to the security policy as a set of objects, encapsulated in a single collection object of type Evidence. The type of each object in this collection indicates the type of evidence it represents, and there are classes of evidence representing each of the questions I listed above:

  Site
  Url
  ApplicationDirectory
  Zone
  StrongName
  Publisher

      Security policy is composed from three different levels, each of which is a collection of serialized objects. Each of these objects is called a code group, and represents a question posed to the assembly along with a reference to the permission set that should result if the evidence satisfies the question. The question is technically called a membership condition, and the permission sets are named so that administrators can reuse them. Figure 11 shows a membership condition and a corresponding named permission set.

Figure 11 Code Group with Membership Condition

      I've always found the term "code group" to be somewhat confusing, and since I haven't yet come up with a better term, I usually just think of a code group as a node in the graph that makes up a security policy level. Figure 12 shows how a set of code groups, or nodes, forms a hierarchy with a single root. Remember, each node in the policy level represents a membership condition and a reference to a permission set, so by taking the gathered evidence and matching it to the nodes in the hierarchy, the CLR ends up with a union of permission sets that represents the permissions granted by that level of policy. Since the root node is really just a starting place for the traversal, it matches all code and by default refers to the permission set named Nothing, which—you guessed it—contains no permissions.

Figure 12 Security Policy Level Graph

      The actual traversal of the graph is governed by a couple of rules. First of all, if a parent node doesn't match, none of its children are tested for matches. This allows the graph to represent something akin to AND and OR logical operators. Second, each node also has the potential to have attributes that govern the traversal. The attribute that applies here is Exclusive, and if a node with the Exclusive attribute is matched, only the permission set for that particular node will be used. Naturally it doesn't have any meaning for two matching nodes in a policy level to have this attribute, and this is considered an error. It's up to the system administrator to make sure this doesn't happen, but if it does, the system throws a PolicyException and the assembly is not loaded.

Figure 13 Traversing a Policy Level

      Figure 13 shows an example of an assembly downloaded from https://q.com/downloads/foobar.dll, signed by ACME Corporation. Note how four nodes in the graph are matched, and that only one of the publisher nodes matched during the traversal. The left half of this graph illustrates a couple of logical AND relationships for code that comes from ACME Corporation. It says, "code that is published by ACME Corporation AND is downloaded from the Internet gets permission sets bar and baz, while code that is published by ACME Corporation AND is installed locally gets permission sets foo and gimp."
      You may be wondering at this point why I keep talking about levels of policy. The reason is that there are actually three possible policy levels, each of which contains a graph of nodes as you saw in Figure 12. There is a machine policy level, a user policy level, and an application domain policy level, and they are evaluated in that order. The resulting permission set is the intersection of the permission sets discovered during the traversal of the graphs in each of these three policy levels.

Figure 14 Three Policy Levels

      The application domain policy level is technically optional, and is provided dynamically by the host. The most obvious example of this feature is a Web browser, which may want the option of a more restrictive policy for its app domains. Figure 14 shows how I like to think of the policy levels. You can use yet another attribute on a node to halt the traversal of policy levels: LevelFinal. If this attribute is discovered on a matching node, no further policy levels will be traversed. For instance, this allows the domain administrator to make statements at the machine policy level that cannot be changed by individual users by editing user-level policy.

Fine-tuning the Permission Set

      Once the CLR gathers a set of permissions from the three policy levels, a final step allows the assembly itself to take a stand. Recall that code can fine-tune the available permission set at runtime, either programmatically or declaratively, by denying or asserting permissions. Well, an assembly can fine-tune the permissions granted to it by policy via careful use of these three elements of the SecurityAction enumeration (also shown in Figure 10):

  SecurityAction.RequestMinimum
  SecurityAction.RequestOptional
  SecurityAction.RequestRefuse

      The names of these elements pretty much say it all. If the minimum set of permissions requested by the assembly isn't granted by policy, the assembly won't run. When used sparingly, this particular feature allows you to make some assumptions about your environment, and might make programming a bit easier. However, when used too much, this feature will likely leave a bad taste in your mouth. Using RequestMinimum, for instance, to ask for all permissions your assembly might possibly need, will cause it to fail to load in more circumstances than might be necessary. This also might lead an administrator to loosen up his security policy just a tad more than necessary in order to allow your component to run.
      RequestRefuse seems, at least in these early stages, to be a useful tool to use liberally. This allows you to simply deny yourself permissions that you might have been granted by policy. Make it a point to refuse the set of permissions that you know your assembly doesn't need. It certainly can't hurt to play it safe.
      Finally, RequestOptional allows you to specify optional permissions that you can live without, but can also use if available. This is useful if your assembly exposes optional features that require a few extra permissions.
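      Here's a sketch of what these requests look like as assembly-level attributes; the paths are placeholders, and which permissions you request will of course depend on your own assembly:

  using System.Security.Permissions;

  // Won't load at all unless we can read our data directory
  [assembly: FileIOPermission(SecurityAction.RequestMinimum,
      Read = "c:\\MyApp\\data")]

  // Clipboard access is nice to have, but we can live without it
  [assembly: UIPermission(SecurityAction.RequestOptional,
      Clipboard = UIPermissionClipboard.AllClipboard)]

  // We never call unmanaged code, so refuse that permission outright
  [assembly: SecurityPermission(SecurityAction.RequestRefuse,
      UnmanagedCode = true)]
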
      Given the set of permissions derived from policy, plus the set of minimum, optional, and refused permissions on the assembly, here's a formula described in the CLR documentation that determines the granted permissions for an assembly:

  G = M + (O ∩ P) - R

      Where G = Granted Permissions, M = Minimum request, O = Optional request, P = Policy-derived permissions, and R = Refused permissions.

Viewing and Editing Security Policy

      If you want to poke around with security policy, check out CASPOL.EXE, the code access security policy tool. Here are a few of my favorite command lines to get you started:

  caspol -a -listgroups
  caspol -a -resolvegroup c:\inetpub\wwwroot\bar.dll
  caspol -a -resolveperm c:\inetpub\wwwroot\bar.dll

      The first example lists the code groups for both machine and user policy levels. If you look closely, you'll see a hierarchy of nodes, each of which has a membership condition followed by the name of a permission set. The second example requests a list of matching code groups for a particular assembly, while the third actually resolves the permissions for the assembly.
      See how things change when you refer to the same assembly via HTTP, for instance:

  caspol -a -resolvegroup https://localhost/foo.dll

      While CASPOL.EXE can be used to edit security policy, unless I'm doing something pretty simple, I prefer to simply bring up EMACS and edit the policy file by hand, since it's an XML document. Please make a backup of your original file if you decide to try this yourself. As of this writing, you can find the machine policy file in %SYSTEMROOT%\ComPlus\v2000.14.1812\security.cfg. Your version number may not match mine, but you get the idea. The user security policy is stored in the user profile directory under the same path. As of this writing, the default user policy grants all code FullTrust, which effectively means that the security policy is completely governed by the machine policy.

Conclusion

      Code access security, when combined with code verification in the CLR, provides a significant step away from the laissez-faire approach taken in previous generations of the platform, where slinging DLLs was considered tremendously fashionable compared to simply building large monolithic applications that were arguably much more secure.
      Code access security acknowledges the fact that today's applications are built from components. It also considers the source of those components in making security policy decisions that are preventive as opposed to punitive, and should generally enhance the safety of many of the emerging class of mobile code apps.
      Code access security is absolutely not a silver bullet. It introduces a whole host of complexities, not the least of which is the challenge of administration. Without educated administrators who are willing to take the time to understand this feature, it may simply become a screen behind which many new attacks will surface. It would be wise to look at the spotty history of Java security, which has dealt with mobile code for several years now with varying degrees of success, and evaluate this new architecture with history in mind. Visit my Web site, where I am collecting the latest in .NET security news and sample code, as well as references to existing works on mobile code security. Finally, feel free to share with me your comments and concerns regarding the CLR security architecture.

For related articles see:
Code Access Security
.NET Framework Developer Center
Keith Brown works at DevelopMentor researching, writing, teaching, and promoting an awareness of security among programmers. Keith authored Programming Windows Security (Addison-Wesley, 2000). He coauthored Essential COM, and is currently working on a .NET security book.

From the February 2001 issue of MSDN Magazine