
The .NET Developer's Guide to Identity

Windows Server 2003

Keith Brown

June 2006

This is a roadmap for developers and architects who want to learn how to build identity-aware applications on the Microsoft® Windows® platform. From authentication and authorization to federated identity, you'll discover techniques that can be used today and in the future to leverage identity infrastructure such as Active Directory®. Relying on that infrastructure reduces the need to build your own, which only fragments identity and leads to silos and higher costs. Throughout this roadmap, you'll find links to other resources that provide more detail.


Programming Identity-Enabled .NET Applications
Federated Identity
Security in the Windows Communication Foundation (WCF)
Authorization Strategies


When Microsoft first asked me to write a white paper for developers about building identity-aware applications, I was initially hesitant, because I was concerned that I'd not have the freedom to tell the story the way I like to tell it. And boy do I like to tell this story: I've only been refining it for the last six years! But now that the paper is written and I'm finally adding this introduction, I can tell you that this is indeed my story, and not a bunch of marketing drivel. This is a real, practical tale that will lead you through the identity landscape on the .NET platform and help get you started designing and building effective identity-aware applications. If I've been successful, this story will help you put together many pieces of the puzzle that the SDK documentation only hints at. And you'll get a glimpse of what is coming in the future so you can prepare today.

What is an "identity-aware" application, anyway? In my mind, first and foremost it's an application that relies upon details of its clients' identity, adjusting its behavior based on those details. That's why the first section of this paper focuses on authentication. Another part of being identity-aware is being directory aware. I'm not here to tell you to throw away SQL Server in favor of Active Directory, but I am here to tell you that there's a whole lot to be gained by relying more on that directory. You may be surprised how few lines of code it takes to look up a user's e-mail address or phone number, and this might be just enough to give an ISV that competitive edge he's been looking for. And if you're looking for a self-contained directory service you can include with your application, Active Directory Application Mode (ADAM) is for you, and I'll show you where to go for more information later in this paper.

Another thing that identity-aware apps typically do is authorize their users to perform various operations. That's the second section of this paper, where I emphasize how helpful it can be to factor authorization policy out of your code so that it can be administered easily. I'll take what you already know about groups and build on that so you can understand role-based authorization, and later on claims-based authorization, which is the future on this platform. I'll also help you choose between a trusted subsystem model versus impersonation, which is a critical design decision when you're building multi-tier systems. It's important to know what the tensions are.

While these first two sections are pretty conceptual, the next section is all about programming. While one of my goals is to get you writing less code and relying more on the platform for security features, there will still come a time when you've just gotta write some code. Here I'll show the Zen of role-based security in the .NET Framework; how infrastructure communicates identity to your application via Thread.CurrentPrincipal. I'll also show the five or so lines of code it takes to look up data in Active Directory when you need it (see, I told you it was easy). Then I'll show you the minimal code you need to get started using Authorization Manager (AzMan), a hidden gem on the Windows platform that can help you factor out an authorization policy from your application.

Then I'll get conceptual again and tell you all about federated identity, which will prepare you for what's coming in the future, as well as introduce you to Active Directory Federation Services, which shipped in Windows Server® 2003 R2. Here you'll learn what a claim is, what claims-based authorization is all about, and how claims transformation and security token services (STS) can be a powerful force for bringing companies together across the Internet. Once you understand these underpinnings of federated identity, I will walk you through the identity metasystem and show how InfoCard fits into the picture, providing the foundation for an identity layer on the Internet.

Because the Windows Communication Foundation (WCF) will soon be the de facto communication stack for Windows, and a very big piece of it deals with helping you build identity-aware applications, I'll talk about designing and programming identity in WCF in the final section. I'll talk about the typical bindings you'll want to use and the tweaks you can make to those bindings to support the identity infrastructure you need. And since WCF is one of the main ways to program federated identity, I'll show you how to access claims from the WCF security context.

This paper is intended to be more of a roadmap than a step-by-step guide to building an identity-aware application. Wherever there are places you'd like to drill down further, I'll supply links to other more detailed documents you can use to follow up. But don't think that this paper is just for pointy-haired weenies. This is for designers and developers; its goal is to communicate the Zen of identity on this platform. Enjoy!


Authentication

Authentication is a fascinating topic. Traditionally it's been all about getting an answer to the question, "Who are you?" There are many ways of answering this question, and they all involve challenging the user to provide evidence of some sort: "What do you know?" is one challenge that is often used (some would say overused) in computing. This is where a user supplies a secret that he shares with the security system, which often takes the form of a password or PIN code. "What do you have?" is another common challenge that you encounter whenever you use an ATM card to withdraw money or buy groceries. In this case you're also asked for a PIN code, so you've got two-factor authentication: two pieces of evidence are required for the user to proceed. A smart card with a PIN code is a similar but even stronger solution being deployed by many companies today, including Microsoft. A biometric is a third piece of evidence sometimes used in high-security systems. Anything from a fingerprint or hand geometry reader to a retinal scanner presents the challenge, "What are you made of?"

In the past we've thought of authentication as a way of discovering the client's name, thus, "Who are you?" was an appropriate question. But in the future, as you'll see later in this paper, the notion of identity is not necessarily going to be tied to a user's name. Perhaps you don't care who the caller is, but only that the caller is authorized to perform the action being requested. So a more modern phrasing of this question might be, "What do I know about the sender of this message?"

Choosing how to authenticate users is the first step toward building an identity-aware application. But it's a hard problem, and as you'll see, it's rarely a good idea to build your own solution.

Authentication Is a Hard Problem

Imagine you have an application that is listening on a socket. When you receive some data over that socket, do you know anything for certain about the sender? This is a surprisingly difficult and subtle question to answer! You likely already know that you can't rely on the IP address in the message header, because it's relatively easy to spoof. So you set about building a more secure solution.

You might start by including a user name in the message. This is the first step toward authenticating the user: she's got to have some way to tell you who she is. This is known as identification. And if you trusted all of the users of your system to identify themselves correctly, you could stop here. But one of the things we strive to do in secure systems is reduce the need for trust, because inevitably somebody is going to break the rules.

To solve this problem, let's say you introduce a secret password for each user, and require them to submit not only their user name, but also their password in each message. By stating their password each time, the user is supplying proof of their identity. Now in order to cheat, the attacker needs to learn the password for the user he wants to impersonate. But this isn't hard since the password is sent in the clear. Ethernet networks, even those with switches, are susceptible to sniffing attacks. If you've ever used a tool such as Netmon or Ethereal, you know just how easy it is to spy on your neighbor's network traffic.

So instead of sending the password in the message, you decide instead to use the password to derive a cryptographic key (there are even standards for doing this: see http://www.faqs.org/rfcs/rfc2898.html). The client can then use that key to sign the message, and the receiver can later verify the signature of the message received. This is called proof of possession. Cryptography provides many ways to prove knowledge of a secret without disclosing the secret itself. Your solution is starting to get more complicated now, but you're probably feeling a little better about it.
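To make this concrete, here is a minimal sketch in Python of the idea (the password, salt, and iteration count are illustrative values, and an HMAC stands in for whatever signature scheme a real protocol would use):

```python
import hashlib
import hmac

# Derive a key from the user's password (PBKDF2, per RFC 2898).
# The salt and iteration count here are illustrative, not recommendations.
password = b"correct horse battery staple"
salt = b"per-user-salt"
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# The client signs the message with the derived key...
message = b"transfer $100 to account 42"
signature = hmac.new(key, message, hashlib.sha256).digest()

# ...and the server, which derives the same key from its own copy of the
# password, verifies the signature. The password itself is never sent.
server_key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
assert hmac.compare_digest(
    hmac.new(server_key, message, hashlib.sha256).digest(), signature)
```

The key point is that both sides can compute the same key independently, so knowledge of the password is demonstrated without disclosing it.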

Sadly, this countermeasure won't stop a determined attacker. A password is a really lousy source for a key. Because users cannot remember long, complicated passwords, the key becomes quite easy to guess. And passphrases aren't much better. A wise attacker will start by guessing passwords or passphrases by using dictionary words and permutations of those words. He knows that he's done when the signature verifies, or the ciphertext decrypts to something reasonable. This attack can be automated and run offline on the attacker's time, where he can devote all the computing resources he owns to the problem. Consider that most successful attackers own an awful lot of machines; hopefully none of them happen to reside in your home or office!
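Here's a sketch of that offline attack, again in Python with made-up values; note that nothing in it requires contacting the victim's server, which is exactly why it's so dangerous:

```python
import hashlib
import hmac

def derive_key(password: bytes) -> bytes:
    # The same derivation the legitimate client uses (PBKDF2, RFC 2898).
    return hashlib.pbkdf2_hmac("sha256", password, b"per-user-salt", 1000)

# The attacker sniffed one signed message off the wire.
message = b"transfer $100 to account 42"
real_signature = hmac.new(derive_key(b"sunshine"), message,
                          hashlib.sha256).digest()

# Offline, at his leisure, he tries dictionary words until one verifies.
wordlist = [b"password", b"letmein", b"dragon", b"sunshine", b"monkey"]
recovered = None
for guess in wordlist:
    candidate = hmac.new(derive_key(guess), message, hashlib.sha256).digest()
    if hmac.compare_digest(candidate, real_signature):
        recovered = guess   # he knows he's done: the signature verified
        break

assert recovered == b"sunshine"
```

A weak password falls to this loop almost immediately; a high iteration count slows it down but cannot save a dictionary word.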

Even the venerable Kerberos protocol is subject to these sorts of attacks when weak passwords are in use, which is why companies are moving toward smart cards, where much stronger keys can be used. And if you're running Windows, your platform supports this.

I hope I've convinced you that authentication is a really hard problem! This is not a job you want to take on yourself. A much better plan is to rely on expert cryptographers to build this sort of plumbing for you. Windows has the plumbing you need: all you have to do is choose to use it.

Let the Platform Do the Heavy Lifting: Single Sign On

Out of the box, Windows comes with several strong authentication protocols built in. By building an application that simply relies on the operating system to provide authentication, you avoid having to design your own authentication protocols. I don't know of any programmer who enjoys building password databases; that's a very uncomfortable position to be in! Besides, who needs another silo for identity data? Users already have way too many credentials to manage as it is, and the cost of managing all these identities is going through the roof for businesses. By offering application software that leverages platform authentication, you're going to immediately gain many strong advocates for your cause: the system administrators in the IT department.

Think about what happens when a new user joins the company. If your application relies on platform authentication, you really don't care! Using groups, the system administrators will be able to grant the new user access to whatever resources she needs, without having to touch your application. But if you've implemented your own authentication scheme, an administrator is going to need to configure a new user account in your application. This process is known as user provisioning, and it's a major pain point for IT staff in any company.

Even worse, what happens when a user leaves the company? If the administrator forgets to deprovision the user account for your application, you've now got a non-employee with credentials that might let her continue to access resources that she shouldn't be allowed to use!

I hope you can see that having a single set of credentials to access internal company resources is a huge win for employees and IT staff alike. But I've not even touched on the really magical benefit that the end user sees. Single sign on (SSO) is what I'm talking about. A single sign on experience is one of the best ways you can contribute to security transparency. By leveraging built-in platform authentication, any user who has logged into her workstation using a domain account can immediately access your application without being asked to log in again. She can swiftly move around the network, doing her work efficiently, without encountering friction from the security system.

There are companies that spend fortunes trying to implement SSO across the myriad of applications they've developed or purchased over the years. What I'm here to tell you is that you can get this feature for free; all you have to do is agree to write less code and instead rely on the platform to do the heavy lifting.

Authentication Protocols Supported by Windows

There are many authentication protocols built into Windows, but the heavy hitters are Kerberos and SSL/TLS. Since you'll most often encounter these, a brief description of both is in order. And to explain smart card login and SSL/TLS, you'll need a little background on certificates and PKI.

Understanding the basics of how these protocols work can help you when architecting new systems or debugging problems in existing systems. Since the claims-based systems of the future rely on many of the same ideas that these protocols were built around (proof keys are one example), you'll have an easier time understanding how InfoCard, say, works under the hood. Besides that, they are just plain interesting!


Kerberos

Kerberos v5 is the authentication protocol used in Windows domains by default. Technically, Windows uses a negotiation protocol called Simple and Protected Negotiation (SPNEGO) to negotiate between Kerberos and the Windows NT 4 challenge-response protocol (NTLM), but Kerberos is the preferred protocol.

The name Kerberos comes from Greek mythology: it's the three-headed dog that guards the entrance to Hades (the joke being that it ought to be guarding the exit). The three heads represent the three parties involved in the transaction: client, service, and the KDC, or key distribution center. The KDC is the keeper of secrets, and in Windows this is the domain controller.

Each user and computer in a domain has a master key. For users, this key is derived from the password by hashing it. The KDC has a copy of the master keys for each user and computer in the domain, and can therefore vouch for their identity. In fact, that's where the KDC gets its name: its job is to use the master keys it shares with all domain users to securely distribute a key that one user, say Alice, can use to authenticate with a service, which must be configured with valid domain credentials. To make this more concrete, let's say the service in this case is an IIS-hosted Web application, where the application pool has been configured to run under an account called Bob.

Figure 1. Example of Kerberos authentication protocol

When Alice logs into her workstation in the morning, she types in her password. Her workstation hashes this to discover her master key, which is then used to request a ticket from the KDC. This initial ticket is called a Ticket Granting Ticket (TGT), and will be used by Alice later in the day when she needs to authenticate with other services on the network. Kerberos tickets contain a lot of things, but most important are the client's name (Alice in this case) and a proof key. Each ticket is encrypted with the target service's master key, and in the case of a TGT, the target service is the KDC itself.

Think about that for a moment. The KDC produces a random proof key and puts it together with Alice's name into a ticket, which is then encrypted so Alice cannot read it. How does Alice ever discover the proof key? Well, when the KDC sends the TGT back to Alice, alongside the TGT it sends a copy of the proof key for Alice to use. This copy is encrypted with Alice's master key. You can see the KDC doing its job here; it's distributing a proof key for Alice to use later on! The workstation decrypts this proof key using Alice's master key and caches the ticket along with the proof key in Alice's logon session. As long as Alice stays logged on to the workstation, this ticket and proof key remain cached down in the kernel for her. There's no need for Alice to type in her password again; she's effectively traded her password for a Kerberos ticket.

Later when Alice needs to authenticate with a service on the network (Bob in this case), she'll go to the KDC and ask for a ticket for that service. In order to prove to the KDC that she's really Alice, she presents the TGT she received earlier along with something called an authenticator. This is simply Alice's name along with a timestamp, all encrypted with the proof key. Hopefully you can see the chain of evidence here: the authenticator acts as proof of possession of the key the KDC sent to Alice with her TGT. Because that proof key was originally sent encrypted with Alice's master key, if Alice can prove she knows the proof key, the KDC has the evidence that shows this is indeed someone who knows Alice's password.

Note   If you want the gory details, the KDC decrypts the TGT, pulls out Alice's name and the proof key, and uses the latter to decrypt the authenticator. If the name in the authenticator matches the name in the TGT, this shows the encryption succeeded. The timestamp in the authenticator is then checked against a replay cache, which records a window of authenticators seen recently (within the last five minutes) to ensure that someone isn't simply replaying an old authenticator. If the clocks are too far out of sync, the authenticator will be rejected along with a notice of the current time so that the client can retry (time isn't a secret, remember, and Windows keeps clocks synchronized within the domain by default to optimize this protocol).

Once assured of Alice's identity, the KDC generates a new random proof key for Alice and Bob to use and pops it into a new ticket with Alice's name, just like before. The KDC encrypts the ticket with the service's master key (a hash of Bob's password) and sends the ticket back to Alice, along with a copy of the new proof key that she can decrypt.

Alice's workstation now caches this second ticket along with its proof key in Alice's logon session. Alice can use this to authenticate with Bob throughout the day (all Kerberos tickets have expiration times inside them; they are designed to last for a single workday).

To prove her identity to Bob, Alice creates a fresh authenticator, encrypts it with the proof key in Bob's ticket, and sends this along with the ticket to Bob. Bob decrypts the ticket and validates the authenticator just like the KDC did, then encrypts the timestamp in the authenticator with the proof key and sends it back to Alice as proof that he also knows the key. If you think about it for a little while, you should be able to convince yourself that Alice and Bob have just completed mutual authentication. Bob has proof that his peer is Alice, and Alice has proof that it's Bob.

Note   If this isn't clear, remember that Bob just proved to Alice that he knew the value of the key in the ticket she sent. Keep in mind that the ticket was encrypted with Bob's master key, so this tells Alice that her peer knows Bob's master key. This helps prevent an attacker from spoofing Bob's service.
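The exchange between Alice and Bob can be sketched in a few lines of Python. This is a deliberately simplified model: real Kerberos encrypts the authenticator, while here a keyed HMAC stands in for that, and the proof key is a fixed illustrative value rather than a random one from a KDC.

```python
import hashlib
import hmac
import time

# The proof key the KDC distributed inside the service ticket; both Alice
# and Bob know it after the ticket exchange. (Illustrative value.)
proof_key = hashlib.sha256(b"random key from the KDC").digest()

# Alice builds an authenticator: her name plus a timestamp, keyed with the
# proof key (real Kerberos encrypts this; an HMAC stands in here).
timestamp = int(time.time())
alice_auth = hmac.new(proof_key, b"Alice|%d" % timestamp,
                      hashlib.sha256).digest()

# Bob, who decrypted the ticket and learned the proof key, verifies it...
expected = hmac.new(proof_key, b"Alice|%d" % timestamp,
                    hashlib.sha256).digest()
assert hmac.compare_digest(alice_auth, expected)

# ...then proves he knows the key too, by keying the timestamp back.
bob_reply = hmac.new(proof_key, b"%d" % timestamp, hashlib.sha256).digest()
assert hmac.compare_digest(
    bob_reply,
    hmac.new(proof_key, b"%d" % timestamp, hashlib.sha256).digest())
# Mutual authentication: each side has demonstrated possession of the
# proof key, which only the KDC could have distributed.
```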

Kerberos tickets are also used to convey authorization information from domain controllers to services. Domain controllers in Windows not only put the user's name in the service ticket, but also the complete list of domain groups in which the user is a member. So when Bob receives a ticket from Alice, he's effectively receiving a signed set of claims from the domain controller. One of those claims is Alice's identity, and the rest are a set of group claims.

In the spirit of readability and brevity, I've talked about Alice and Bob as if they were doing the work here. But it turns out that the platform provides this code packaged up in a security support provider (SSP), a DLL that both the client and server processes load. When you write a client or service, this DLL will be loaded automatically as long as you configure your code to use platform authentication. I'm sure many of you have enabled Windows Integrated Authentication in IIS, and you've never had to worry about decrypting Kerberos tickets!

Kerberos, Passwords, and Smart Cards

Traditional Kerberos relies on symmetric master keys, and in most "Kerberized" systems, including Windows, these keys are derived from passwords. But as I mentioned earlier in this article, passwords are a really bad source of entropy for cryptographic keys. Most humans simply cannot remember a password that is strong enough to survive offline dictionary or brute-force attacks. Given that service tickets are encrypted using the service's master key, it's imperative that service account passwords are long and randomly generated. Since no human ever has to remember these passwords, there's no excuse not to use good ones. I recommend using a unique 20-character, randomly generated password for each service account you create. This will give you about 128 bits of entropy.
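A quick back-of-the-envelope check of that entropy figure, assuming each character is drawn uniformly from the 94 printable ASCII characters (the exact number depends on the character set you allow):

```python
import math

# Entropy of a 20-character password drawn uniformly from the 94
# printable ASCII characters: log2(94^20) bits.
alphabet = 94
length = 20
bits = length * math.log2(alphabet)
assert 128 < bits < 132   # roughly 131 bits, comfortably past 128
```

Shrink the alphabet (lowercase letters only, say) and the per-character entropy drops from about 6.5 bits to under 5, which is why character variety matters as much as length.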

Note   The term symmetric means that the same key is used for encryption and decryption. This is different from public key cryptography, where asymmetric keys are used. In the latter case, the encryption and decryption keys are different.

But there is no way a human will remember a 20-character random password. To deal with the human side of things, Kerberos supports an extension called PKINIT (public key cryptography for initial authentication). This allows a client to present a certificate to the KDC and prove knowledge of her private key in order to receive a TGT. When you use smart card login in Windows, this is what's happening under the covers.

A smart card contains your certificate and private key, along with executable code that can perform encryption and signing right there on the card, so your private key never even needs to be loaded into the workstation's memory. And because a PIN is required, if you lose the card, an attacker will only have a few guesses before the card essentially self-destructs. Smart cards are not a hundred percent tamperproof; they are only tamper resistant. But practically speaking, that's usually good enough, because it slows the attacker down: it buys you enough time to revoke the lost certificate so the domain controller will no longer accept it. There's simply no comparison: smart card login is much more secure than using a simple password. The tradeoff is that it's also more complex to administer and deploy.

But as an application developer relying on platform authentication, you don't care. By the time the Kerberos ticket gets to your service, you aren't worried whether the client initially authenticated with a password or a smart card. All you care about is that you receive a signed set of claims from a trusted authority (a domain controller) that you can use to make authorization decisions.

This is exactly as it should be. You can leave security policy decisions to the IT staff where your application is deployed. One site might decide that passwords are acceptable. Another might want smart card login. Your application will function normally in either case, if you simply rely on platform authentication.

Certificates and Public Key Infrastructure (PKI)

Public key cryptography is a relative newcomer. Discovered in the mid-70s, these math-intensive cryptosystems allow one key to be used for encryption, and an entirely different key to be used for decryption. This simplifies key exchange because the public key isn't a secret. Only the private key needs to be kept hidden from prying eyes.

Say Bob generates an RSA public/private key pair (RSA is the most well-known public key algorithm, its name taken from the initials of the three inventors). As long as he keeps his private key to himself, he can publish his public key on his home page for all to see. In fact, many people use a tool called Pretty Good Privacy (PGP) and do exactly this. If Alice wants to send Bob a secret message, she can get a copy of his public key and encrypt a message, and now the corresponding private key must be used to decrypt that message.

Note   Technically, Alice would generate a random symmetric key to encrypt the message, because it's simply too expensive to encrypt bulk data with a public key. She would then encrypt this symmetric key with Bob's public key, and send the encrypted key along with the encrypted message. Bob can then unwind the message by decrypting the symmetric key and using that to decrypt the message. This hybrid approach is standard practice in any public key cryptosystem, but you'll often hear people talk about encrypting data with a public key when they actually imply the use of an underlying symmetric key.
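The note above can be sketched end to end with a toy, textbook RSA key pair. The tiny primes and the SHA-256-derived keystream are stand-ins for real RSA and a real cipher such as AES; never use parameters remotely like these in practice.

```python
import hashlib

# Toy textbook RSA parameters (tiny and wildly insecure; illustration only).
p, q = 61, 53
n, e, d = p * q, 17, 2753      # d is the inverse of e mod lcm(p-1, q-1)

# Alice picks a symmetric key (a small fixed value here, so the sketch is
# deterministic) and wraps it with Bob's public key (n, e).
sym_key = 1234
wrapped = pow(sym_key, e, n)

# She encrypts the bulk data with the symmetric key. A SHA-256-derived
# keystream stands in for a real symmetric cipher.
message = b"meet me at noon"
stream = hashlib.sha256(sym_key.to_bytes(2, "big")).digest()
ciphertext = bytes(m ^ k for m, k in zip(message, stream))

# Bob unwraps the symmetric key with his private key (n, d), regenerates
# the keystream, and recovers the message.
recovered_key = pow(wrapped, d, n)
stream2 = hashlib.sha256(recovered_key.to_bytes(2, "big")).digest()
plaintext = bytes(c ^ k for c, k in zip(ciphertext, stream2))
assert recovered_key == sym_key and plaintext == message
```

Only the short key travels under RSA; the bulk data travels under the much cheaper symmetric cipher. That division of labor is the whole point of the hybrid scheme.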

It's also possible for Bob to use a private key as the encryption key to form what's called a digital signature. By hashing a message he wants to send to the world, and then encrypting that hash with his private key, anyone with Bob's public key can verify Bob's signature on the message.

Note   A cryptographic hash function is a little bit like a checksum: it takes a variable length message and computes a fixed length hash of that message, typically between 20 and 32 bytes long. For more info on hash functions in cryptography, see http://en.wikipedia.org/wiki/Cryptographic_hash_function.

If Alice receives a signed message from Bob, she can decrypt the signature with Bob's public key to obtain the hash that Bob calculated. She then hashes the message she received and compares her hash to the one in Bob's signature to decide if this is indeed the message that Bob's key was used to sign.
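Here is the sign-then-verify round trip with the same toy textbook-RSA parameters (illustrative only; real signing keys are thousands of bits, and real schemes pad the hash rather than reducing it):

```python
import hashlib

# The same toy RSA key pair (insecure; for illustration only).
n, e, d = 61 * 53, 17, 2753

def toy_hash(msg: bytes) -> int:
    # Reduce a SHA-256 digest into the toy key's range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

message = b"I, Bob, owe Alice nothing"
signature = pow(toy_hash(message), d, n)   # Bob signs with his private key

# Alice "decrypts" the signature with Bob's public key and compares the
# result to her own hash of the message.
assert pow(signature, e, n) == toy_hash(message)

# A forged signature maps to a different value, so verification fails.
assert pow((signature + 1) % n, e, n) != toy_hash(message)
```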

While public keys aren't secrets, they do require some protection. Imagine if a bad guy, say Mallory, is able to infiltrate Bob's Web site and replace the public key he is displaying with Mallory's public key. Now any secret messages Alice sends to Bob are actually encrypted with Mallory's key, so Mallory can read them! And any messages Bob signs can be altered and re-signed by Mallory, and Alice won't realize she's being fooled. The problem is that the public key Alice holds is Mallory's, but she mistakenly believes that it's Bob's.

This is where certificates and Public Key Infrastructure (PKI) come into play. Think of a certificate as a little data structure that contains a name and a public key. This entire data structure is then signed by a third party, binding the name and key together. Using PKI, Bob would present his public key to Verisign or some other certificate authority (CA). Verisign performs whatever due diligence makes sense to verify Bob's identity, and then constructs a certificate that contains Bob's name and public key, with Verisign's signature binding the whole thing together.

When Alice goes to Bob's Web site and downloads his certificate, she can check Verisign's signature (Verisign and many other CAs have well-known public keys that are shipped with most operating systems). If Verisign's signature checks out, she checks with Verisign to see if Bob has revoked his key. If everything checks out, and the certificate has not yet expired (most certs are valid for one to five years), she can use her trust in Verisign to believe that she indeed has Bob's public key.

Note   In general, PKI allows for certificate chains, where a root CA issues certificates for intermediary CAs, who then sign certificates such as Bob's. These chains can be of arbitrary depth, with the idea being that you walk up the chain until you find an authority you trust. As long as Alice trusts at least one CA in the chain, she can use Bob's certificate with a degree of confidence.

Another way to think of a certificate is that it is a signed statement by a trusted third party that says, "This is Bob's public key". Or even better, you can think of it as a signed set of claims, a lot like a Kerberos ticket!
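That "signed set of claims" view can be sketched directly, reusing a toy textbook-RSA key pair for the CA (the certificate fields, key bytes, and CA parameters are all invented for illustration):

```python
import hashlib

# Toy CA key pair (textbook RSA; wildly insecure, illustration only).
ca_n, ca_e, ca_d = 61 * 53, 17, 2753

def digest_int(data: bytes) -> int:
    # Reduce a SHA-256 digest into the toy key's range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % ca_n

# A certificate is essentially a name bound to a public key...
cert_body = b"subject=Bob;pubkey=bob-public-key-bytes;expires=2007-06"

# ...signed by a third party everyone trusts (the CA's private key).
ca_signature = pow(digest_int(cert_body), ca_d, ca_n)

# Alice ships with the CA's public key (ca_n, ca_e). Verifying the CA's
# signature tells her the name/key binding inside is genuine.
assert pow(ca_signature, ca_e, ca_n) == digest_int(cert_body)

# If Mallory tampered with the body (say, swapping in her own key), the
# body would hash to a different value and the check above would fail.
```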


SSL/TLS

Secure Sockets Layer (SSL) and Transport Layer Security (TLS), which is really just the IETF-standardized version of SSL, are authentication protocols just like Kerberos. The big difference is that they depend on public key cryptography, certificates, and PKI.

SSL essentially allows Bob to prove his identity to Alice by presenting his certificate and proving that he knows the corresponding private key. This should feel pretty familiar if you read the section on Kerberos; the private key in this case is being used as the proof key. If mutual authentication is required, Alice can prove her identity to Bob by presenting a certificate along with proof that she knows her private key. Of course just like with Kerberos, it's the underlying SSL plumbing that's actually doing the work.

In business-to-business (B2B) scenarios, it's common for both parties to use a certificate to authenticate. But certificates are unwieldy for humans to manage. Unless stored on a smart card, they must be installed on the user's machine, where they suffer a distinct lack of mobility. A certificate installed on your machine at work isn't going to be very useful when you need to authenticate from home, while a smart card can be carried with you to and from work.

So in most business-to-consumer (B2C) scenarios, while SSL might be used to authenticate the service to the customer via a certificate, the customer will often not have a certificate of her own. So to achieve mutual authentication, the service will ask the user for a password or another credential. In this case it's critical that the channel over which those credentials are sent is encrypted.

Channel Integrity and Confidentiality

Both SSL/TLS and Kerberos provide not only authentication services, but also integrity and confidentiality for the ensuing conversation. Once the initial authentication handshake is over, both parties end up with a symmetric key (often referred to as a session key) that can be used to derive keys for encryption and integrity protection. What this means is that under both SSL/TLS and Kerberos, you can (and should) configure your communication plumbing to sign and encrypt all data passing between you and your peer. The Windows Communication Foundation (WCF) is configured to do this by default.
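The key-derivation idea can be sketched as follows. Labeled SHA-256 hashes stand in for the real PRF/HKDF, an XOR keystream stands in for a real cipher, and all the values are illustrative:

```python
import hashlib
import hmac

# The session key both parties hold after the SSL/TLS or Kerberos
# handshake (illustrative value).
session_key = hashlib.sha256(b"negotiated during the handshake").digest()

# Derive separate keys for confidentiality and for integrity (real
# protocols use a PRF or HKDF; labeled hashes stand in here).
enc_key = hashlib.sha256(session_key + b"encryption").digest()
mac_key = hashlib.sha256(session_key + b"integrity").digest()

# Protect one message: encrypt, then authenticate the ciphertext.
message = b"password=hunter2"
keystream = hashlib.sha256(enc_key + b"record-1").digest()
ciphertext = bytes(m ^ k for m, k in zip(message, keystream))
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# The receiver derives the same keys, checks the tag, then decrypts.
assert hmac.compare_digest(
    tag, hmac.new(mac_key, ciphertext, hashlib.sha256).digest())
plaintext = bytes(c ^ k for c, k in zip(ciphertext, keystream))
assert plaintext == message
```

Using distinct derived keys for encryption and integrity, rather than the session key directly for both, is the standard practice the real protocols follow.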

In the B2C scenario I mentioned earlier, this enables the common pattern where a Web application uses a form to authenticate the client. The credentials the client sends (typically a user name and password), along with any cookie issued in response, can be protected via SSL/TLS.

Surfacing Authentication in Windows

So now that you've learned the basics of Kerberos and SSL/TLS, you might be wondering just how you go about using them! In the vast majority of cases, you can simply configure the application you are writing to use whichever option makes sense. For example, in IIS, you can configure a virtual directory to require SSL once you've set up an SSL certificate for your Web site. Once you've done that, you can then configure whether or not you require a certificate from your clients.

If you're using WCF, you can configure your service with a certificate and use SSL to communicate with it. You then have your choice of client credentials you will accept, from a simple password to a certificate or something more exotic like InfoCard credentials. Many other Microsoft technologies support SSL/TLS as well, such as SQL Server.

Kerberos support abounds in Windows. This is the built-in authentication you'll get in RPC and COM+. WCF supports Kerberos as well.

And in version 2.0 of the .NET Framework, a suite of secure stream classes was added that makes it trivial to add SSL/TLS or Kerberos to any socket you open. If you're interested, check out the System.Net.Security.AuthenticatedStream class and its derivatives.

Where Are We?

Kerberos and SSL/TLS are the workhorses of authentication on the Windows platform. These protocols are industry standards and as such are constantly subject to a barrage of analysis by cryptographers around the world. You benefit in many ways from using these protocols:

  • Your infrastructure uses standard, well-analyzed cryptographic algorithms.
  • You aren't creating new identity silos.
  • You get single sign on for free.
  • Your application is easier to deploy and maintain.
  • Your application will be much more desirable to IT staff.

I strongly recommend that you resist the urge to roll your own authentication system for applications that you build. Write less code. Be more secure. Rely upon platform authentication.


Logon and Authentication Technologies

Authentication in ASP.NET: .NET Security Guidance


Authorization Strategies

Authenticating a user answers the question, "Who are you?" The next question is, "What are you allowed to do?" This is called authorization, and while there are many ways of approaching it, the best solutions always involve some form of policy. By encapsulating your authorization strategy in a policy that an administrator can tweak as the business changes (rather than hardcoding it into your business logic), you're designing a more agile solution.

An authorization policy can be as simple as a set of groups, or as sophisticated as an Authorization Manager policy hosted in Active Directory containing business logic scripts that run when an authorization decision can't be made based on simple static rules. As with many security decisions, the simpler your policy is, the easier it will be to administer without mistakes that either open security holes or deny access to legitimate users.

This discussion assumes that you are using platform authentication as recommended in the previous section. This gives you the richest set of options for authorization today, and it all starts with Windows groups.


Groups

Every user account in Windows is a member of at least one group. Groups are the most natural place to start for an authorization strategy, because the basic concept of a group is already understood by administrators and developers alike. Groups act as a form of built-in security policy: generally, the more groups you belong to, the more access you'll have.

As a concrete example, imagine you were building an application to help manage a pet store. The app must authorize users to perform a number of tasks, from managing payroll decisions to feeding the animals. A set of three groups might be all you need:

  • Managers
  • Staff
  • Customers

When a user wants to purchase a pet, the application verifies the user is a member of the Customers group. When a user wants to feed the animals, the application checks for the Staff group. Only members of the Managers group can give raises, and so on. The pet store installation program can add these groups automatically, and the help file tells the administrator what these groups mean, so he can add users to these groups based on his knowledge of this particular pet store.
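These checks can be funneled through a small helper so that business logic never touches the group mechanism directly. Here's a minimal sketch; the PetStoreAuthorization class name is invented for this example, and in a real deployment the role names would be fully qualified group names such as @"PETSTORE\Staff":

```csharp
using System.Security.Principal;

// Hypothetical helper for the pet store's group-based checks.
public static class PetStoreAuthorization {
    public static bool CanFeedAnimals(IPrincipal user) {
        // With platform authentication, "user" would be a
        // WindowsPrincipal and "Staff" a real Windows group.
        return user.IsInRole("Staff");
    }
    public static bool CanGiveRaises(IPrincipal user) {
        return user.IsInRole("Managers");
    }
    public static bool CanBuyPets(IPrincipal user) {
        return user.IsInRole("Customers");
    }
}
```

Because the helper only depends on IPrincipal, the same checks work whether the roles come from Windows groups, AzMan, or a test harness.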

This simple use of groups decouples the pet store application from its deployment. When deployed at Bob's Pet Shop, Bob will be the only member of the Managers group; when deployed at the Pet City SuperCenter, there will likely be several Managers. The pet store developers didn't have to worry about this, though, because all they care about is whether or not the user is a member of the Managers group. The individual user's identity only comes into play when the application audits the user's actions.

Using groups also reduces the amount of code in the pet store. The developers didn't have to write an application for managing authorization policy, since the operating system already provides tools for managing groups. And because system administrators already know how to use those tools, no extra training is necessary.

In an enterprise scenario, it's likely that an administrator won't assign users directly to an application's groups; rather, he'll take advantage of group nesting in Active Directory to map existing groups in his organization onto the pet store's application groups. This technique is sometimes referred to as using account groups and resource groups. Managers, Staff, and Customers are considered resource groups, because they were defined by the pet store application, a resource that users will access. On the other hand, at the Pet City SuperCenter, the administrators have already designated a couple of groups for managing their staff members: Employees and Temps. These are account groups: the administrator places users in them when he creates their accounts, in order to help categorize the accounts in the organization.

Figure 2. Authorization groups


Roles

The pet store example above is a special case of role-based access control (RBAC), where the app used Windows groups for its "roles." Managers, Staff, and Customers were the three logical roles that the pet store relied upon.

One of the earliest forms of RBAC on the Windows platform was introduced by Microsoft Transaction Server and later incorporated into COM+. This style of access control became very popular over time, and the idea of a generalized RBAC solution was eventually realized in a feature called Authorization Manager, or AzMan for short. AzMan provides a user interface that allows an administrator to manage your authorization policy, and a runtime that allows you to perform access checks against that policy.

Note   The AzMan runtime is installed automatically with Windows Server 2003 and Windows XP. The user interface for managing AzMan policies must be installed separately on Windows XP, via the Windows Server 2003 Administration Tools Pack.

AzMan solves a lot of the problems that plagued previous RBAC solutions. For example, since Windows groups are defined globally, not per application, you need to ensure that your group names don't collide with another application's groups. AzMan allows you to create a policy store in Active Directory, ADAM, or even in an XML file for quick and dirty prototyping. An AzMan store can hold one or more application-specific policies where you define roles that make sense for your application. AzMan roles can be defined recursively in terms of other roles, so they are much more flexible than COM+ roles, and AzMan isn't tied to COM+ or any other communication framework. You can use AzMan in any application.

With AzMan, the developer concentrates on defining operations, which are the basis for all access checks performed by the application at runtime. Roles are then defined in terms of operations. While the developer can define a set of roles up front, an administrator can also come in later on and add new roles, or tweak the definition of a role to suit changing business needs. AzMan roles can then be linked to Windows groups, which means you get the flexibility of the group-based solution I showed earlier, with the agility of an AzMan policy. It's the best of both worlds!

Figure 3. Authorization roles

In the figure above, I've shown three operations that the pet store needs to perform: feeding, showing, and selling pets. Any member of the Staff role is allowed to perform these operations. For this particular deployment, the groups Employees and Temps are mapped onto the Staff role. These group mappings are also part of AzMan policy.

There is a lot more to Authorization Manager that you can learn by reading the technical paper, Role-Based Access Control for Multi-tier Applications using Authorization Manager.

Discretionary Access Control

Role-based access control is almost always sufficient as an authorization solution for line-of-business applications. For other more generic systems, like the Windows file system, registry, or Active Directory itself, a more general solution is required. This is where discretionary access control comes in. Under this authorization strategy, each individual object has a discretionary access control list (DACL) that defines which users and groups are allowed to perform various operations on the object.

We use the term discretionary because each object also has an owner. If Alice creates a file, she owns that file. The owner has the inalienable right to read and write the DACL on her files to grant other users and groups whatever permissions she deems appropriate. Users are granted permissions to the object at the owner's discretion. While this works fine for a file system, it's not appropriate for most line-of-business applications, where the administrator controls (via policy) which users and groups are allowed to perform certain actions.

If you are writing an application that needs discretionary access control, Windows does support the notion of a private security descriptor, which allows you to assign an owner and DACL to each object you create, although this is a much more advanced programming technique than simply using AzMan!

Trust Models

When building connected systems, you'll often have a choice of where authorization should be performed. One option, the impersonation and delegation model, is for the first system to pass the user's identity through to a second system, which performs an access check before carrying out the request. The other option, the trusted subsystem model, is for the first system to perform the access check itself, calling the second system under its own identity only if the check succeeds. A third option is to combine these models.

Your choice of model is important. It can impact how quickly and deeply an attack can penetrate your defenses. It can also impact performance and scalability. To make this discussion concrete, let's take a very typical data-driven Web site and apply these authorization models. The tradeoffs will quickly become clear.

Trusted Subsystem Model

In the trusted subsystem model, the Web application performs an access check for any sensitive operation. If the check succeeds, the app uses its own identity to communicate with the database. The database has no idea who the actual user is; all the database sees is a request coming from the application, which in this example runs under a user account called WebApp. While the database can perform a bit of generic authorization, for example, preventing tables from being dropped, the really interesting authorization must happen in the Web application, where the user's identity is known (Alice).

Figure 4. Trusted subsystem model

This model has some great performance characteristics. Because the app only uses a single identity to connect to SQL Server, connection pooling kicks in. However, because SQL Server must trust the Web application to authorize the user, the Web app and database end up in a single trust unit. Imagine that a SQL injection vulnerability or buffer overrun in the Web app leads to its compromise. (For more information on SQL injection, buffer overruns, and other nasty vulnerabilities, see Writing Secure Code, Second Edition.) The attacker can immediately use any credentials the Web application holds to mount an attack on the database.

Note   Databases aren't the only place where connection pooling is helpful; connections to Active Directory through System.DirectoryServices are also pooled wherever possible.

Trust units are a lot like candy. They have a hard outer shell, but once that's cracked, the inside is soft and delicious!

Impersonation and Delegation Model

In this model, the Web application impersonates the original user before talking to the database. This way the database can see who the user really is (Alice) and is able to do much more effective authorization. In this case, the Web application itself need not have any permission to use the database at all! If the Web application is compromised, the attacker can impersonate incoming clients to attack the database, but presumably the privilege level of most clients will be lower, which will slow down the attack.

Another benefit (which in many cases is a requirement for compliance with government regulations) is that SQL Server can audit the actions of the original user. This only works because SQL Server can see the identity of the original user.

Figure 5. Impersonation and delegation model

This model also has its drawbacks. Can you see how Alice might be able to talk to the database directly, without going through the Web application? Her permission would be limited based on the authorization policy in SQL Server, but this may be a concern in some cases. Connection pooling isn't very effective if each request to the database comes from a different user. This impacts performance. And finally, unless constrained delegation is used, if the Web application is compromised, Alice's credentials might be used by an attacker to go after other systems on the network, not just SQL Server!

Note   Constrained delegation is a feature introduced in Windows Server 2003 that limits where delegated credentials may be used. For more information on this feature, see The .NET Developer's Guide to Windows Security.

Hybrid Model

Nothing says you have to pick only one of these models. You can use a combination of the two! For low-privilege, high-volume transactions, you may decide to use the Web application's identity to access the database, as in the trusted subsystem model. You'll get the benefit of connection pooling where you need it most. But for high-privilege, low-volume transactions, have SQL Server require the original caller's identity. The Web application should impersonate its caller before making a high-privilege request.

Think about how often you administer a Web application like this. If an administrator only logs into the Web application once a week, the hit of impersonating that administrator is negligible compared to the benefit you get by not allowing the Web application to use its own identity to perform the same high-privilege operations!

Where Are We?

Windows has rich support for authorization, so there's no need to roll your own authorization infrastructure. Whether you rely upon groups or use Authorization Manager, you should focus on separating authorization policy from your application's code. This leads to greater agility; an administrator can reconfigure authorization at any time to suit changing business needs.

Carefully consider the different authorization models discussed here. Each carries security and performance tradeoffs, and I've highlighted these tensions so you'll have an easier job making a decision in a given scenario.


Role-Based Access Control for Multi-tier Applications using Authorization Manager

Trust Technologies

Authorization and Access Control Technologies

Programming Identity-Enabled .NET Applications

The .NET Framework provides a lot of plumbing that makes the job of building identity-aware and directory-enabled applications much easier. Most people I talk to are surprised by just how few lines of code it actually takes to make an application identity aware.

In this section, I'll explain the Zen of identity in the .NET Framework. Then I'll show you how to connect to Active Directory and look up data about your users (e-mail, phone numbers, and so on) that you can use in your application. Finally, I'll drill down into AzMan and show how to perform access checks based on AzMan policy.

Identity Abstractions in .NET

The .NET Framework defines two very simple interfaces for representing identity. These interfaces are simple enough to show here:

namespace System.Security.Principal {
    public interface IIdentity {
        bool   IsAuthenticated    { get; }
        string Name               { get; }
        string AuthenticationType { get; }
    }

    public interface IPrincipal {
        IIdentity Identity { get; }
        bool IsInRole(string role);
    }
}

Factoring identity into two interfaces like this turns out to be a wise move. Think of it this way: IIdentity represents the results of authentication, answering the question, "Who are you?" IPrincipal represents the results of authorization, answering the question, "What can you do?" It turns out that in some cases you don't end up answering both questions using the same infrastructure. One example of where this is true is in ASP.NET 2.0 using forms authentication. In this case the identity of the user is supplied by a membership provider, while the roles are supplied by a role provider.

The simplest implementation of these interfaces is supplied by the GenericIdentity and GenericPrincipal classes. If you're rolling your own authentication and authorization infrastructure, you can use these classes to represent identities and roles, but as you've hopefully learned from this paper, you're much better off relying on platform authentication wherever possible.

In the earlier section on authorization, I introduced the notion of roles. Calling IPrincipal.IsInRole is the most direct way to test whether the user is a member of a role. At this level of abstraction, your application doesn't really need to care whether the role is implemented by a Windows group, an AzMan role, or something else entirely. This allows you to write code that performs authorization without worrying about the particular authorization mechanism in use.
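As a quick sketch of this abstraction at work (the user name and role names here are invented for illustration), the following builds a principal by hand with the generic classes and checks a role. Swapping in a WindowsPrincipal or an AzMan-backed principal wouldn't change the calling code at all:

```csharp
using System;
using System.Security.Principal;

class RoleCheckDemo {
    static void Main() {
        // "Custom" is just a label for the authentication type.
        IIdentity id = new GenericIdentity("alice", "Custom");
        IPrincipal user = new GenericPrincipal(id, new string[] { "Staff" });

        // Callers depend only on IPrincipal, not on how the
        // roles were established.
        Console.WriteLine(user.Identity.Name);        // alice
        Console.WriteLine(user.IsInRole("Staff"));    // True
        Console.WriteLine(user.IsInRole("Managers")); // False
    }
}
```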

WindowsIdentity and WindowsPrincipal

If you're using platform authentication, these are the classes that will represent users with Windows accounts. The WindowsIdentity class wraps itself around a Windows access token, which is what you get when you establish a logon through platform authentication. Inside the token are the user's name and security identifier (SID), along with SIDs for each group the user is a member of. The WindowsPrincipal class simply allows you to test whether the user is a member of a group. When you call IsInRole in this case, you'll want to specify a fully qualified group name, such as "MyDomain\MyGroup."

The first time you find yourself calling IsInRole and hardcoding a domain or machine name into your code, you'll likely feel a bit unclean. If so, that's a good instinct: you shouldn't be hardcoding deployment details like that into your apps. There are a number of ways you can factor out these deployment details. For example, you could use AzMan to define logical roles that the administrator can map onto groups at deployment time, and then define an AzManPrincipal class that implements IsInRole by checking AzMan roles. Then, after authenticating the user, toss the WindowsPrincipal you get and replace it with an instance of your AzManPrincipal class. This is a great example of how useful it is to have IIdentity and IPrincipal as distinct interfaces!

Thread.CurrentPrincipal and PrincipalPermission

The developers of the .NET Framework knew that you would most likely rely on plumbing in the framework or platform to do the heavy lifting of authenticating the user and obtaining her authorization attributes such as groups or roles. So they built in an obvious mechanism for plumbing such as ASP.NET and WCF to hand off the authenticated user's identity to application code.

The Thread class exposes a static property (Thread.CurrentPrincipal) that authentication plumbing sets and that your application can later retrieve. This gives you the IPrincipal representing the user, from which you can get the corresponding IIdentity, determine if the user is authenticated, get her name, check roles, etc. This even works in multithreaded environments; each logical thread of execution will have its own principal.
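Here's a minimal sketch of that handoff. In real code the plumbing (ASP.NET, WCF, or your own host) would do the first step for you; I'm fabricating the principal with the generic classes purely to show the pattern:

```csharp
using System;
using System.Security.Principal;
using System.Threading;

class CurrentPrincipalDemo {
    static void Main() {
        // The plumbing sets the authenticated user's principal...
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity("alice", "Custom"),
            new string[] { "Managers" });

        // ...and application code, possibly far away, reads it back.
        IPrincipal user = Thread.CurrentPrincipal;
        Console.WriteLine(user.Identity.IsAuthenticated); // True
        Console.WriteLine(user.IsInRole("Managers"));     // True
    }
}
```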

If you prefer declarative programming, you can use the PrincipalPermission attribute to perform these checks. This allows you to simply declare the authorization needs of any method; at JIT time, checks against Thread.CurrentPrincipal are emitted at the beginning of the method to ensure that your demands are met. If the user doesn't satisfy the criteria you demand, an exception is thrown and the method is not executed. Here's an example:

using System.Security.Permissions;

class InvoiceManager {
    [PrincipalPermission(SecurityAction.Demand, Authenticated=true)]
    void Submit(Invoice invoice)  { ... }

    [PrincipalPermission(SecurityAction.Demand, Role="Manager")]
    void Approve(Invoice invoice) { ... }

    [PrincipalPermission(SecurityAction.Demand, Role="Accounting")]
    void Pay(Invoice invoice) { ... }
}

In this case, any authenticated user can submit an invoice, but she must be in the Manager role to approve invoices, and in the Accounting role to pay them. This is a great way to self-document the authorization requirements of a class, because these attributes can later be extracted with tools like PERMVIEW.EXE, or programmatically through the CLR's metadata interface.

Programming Directory Services

Once you've got a WindowsIdentity for a user, you can use the classes in the System.DirectoryServices namespace to quickly do a lookup on his user account in Active Directory and discover all sorts of information about him:

  • What's his phone number or e-mail address?
  • What office is he in?
  • Who is his manager?
  • Who are his direct reports?
  • What's his employee ID number?

You might expect the code for doing this to be really complicated, but it's surprisingly easy:

const AuthenticationTypes SECURE_BINDING =
    AuthenticationTypes.Secure  | // use platform authentication
    AuthenticationTypes.Signing | // use integrity protection
    AuthenticationTypes.Sealing;  // use encryption

public string LookupEmail(WindowsIdentity id) {
    // bind to the user's account and read the "mail" attribute
    using (DirectoryEntry user = LookupUser(id)) {
        return (string)user.Properties["mail"].Value;
    }
}

DirectoryEntry LookupUser(WindowsIdentity id) {
    // SID binding: look up the account by security identifier
    string path = string.Format("LDAP://<SID={0}>", id.User);
    return new DirectoryEntry(path, null, null, SECURE_BINDING);
}

The trick is to use a technique called SID binding, where you get the user's security identifier (SID) from the WindowsIdentity.User property, which was introduced in version 2.0 of the .NET Framework. Then you hand that value off to Active Directory and boom, you've got a DirectoryEntry object that represents the user's account. This example looks up the user's e-mail address, but it would be just as easy to get the other properties listed above.

As you learn more about programming directory services, you should also spend some time familiarizing yourself with the schema that Active Directory provides by default. There is a ton of great information here that you can harvest for use in your applications, and the system administrator will be very happy that you're relying on a single source of truth for user data instead of creating yet another identity silo to store these sorts of details! The schema is documented in MSDN.

What about independent software vendors building applications that will be deployed at many different companies? Not all of those companies have Active Directory, you may be thinking. Here's the thing: most companies actually do run Active Directory, and of those that don't, most larger companies store user accounts in a directory service accessible via LDAP. It's a very good idea to build solutions that leverage those identity stores instead of building your own identity silo. If you end up in a situation where there is simply no directory service at all, you should consider deploying ADAM to store user accounts, which allows you to use System.DirectoryServices with the same schema you'd expect in Active Directory. The binding technique you'll use will be different in this case, but the overall programming model is the same.

ADAM is an interesting topic in itself. It's a full-fledged LDAP directory service based on the Active Directory code base, but without all the Network Operating System (NOS) infrastructure that you probably don't care about if you simply need an LDAP enterprise or application directory. It has the same replication features as Active Directory and can run on any machine (not just a domain controller). I'm running it on my Windows XP laptop as I write this paper, which makes it easy to program with System.DirectoryServices without having a domain controller around! It's easy to install, as you can see from the paper on bundling ADAM with your applications, referenced below.

For more information on programming System.DirectoryServices, see The .NET Developer's Guide to Directory Services Programming.

Programming Authorization Manager (AzMan)

AzMan exposes a rich programming model that allows you to do anything the GUI can do. You can define entire applications, including roles, operations, and so on, programmatically. But the vast majority of applications shouldn't need to worry about all of that, as the Microsoft Management Console (MMC) snap-in for AzMan already provides an easy-to-use interface for all of these things.

As a developer, there is one AzMan object that you'll really care about, and that's the client context object. This object implements IAzClientContext, and it's what tells AzMan about your client so AzMan can determine what roles the client is in and therefore what operations she is entitled to perform. In order to get one of these objects, you'll need to first connect to an AzMan policy store and open up an application within that store. Applications in AzMan policy allow you to define roles and operations that are specific to your application.

Back when I discussed authorization and roles, I showed a picture of how Windows groups can map onto AzMan roles, which then map onto the operations that your program cares about. In that example, I showed how Alice, who is a member of the Employees group, gets mapped by AzMan onto the Staff role in the Pet Store application. By virtue of being in the Staff role, Alice is granted access to several operations: feeding, showing, and selling pets. IAzClientContext is how your application lets AzMan know what groups Alice is in, and is also how AzMan tells you which operations Alice can perform.

There are a number of ways to create an AzMan client context object. The simplest is when you use platform authentication and you've got a WindowsIdentity for your client. Remember that WindowsIdentity is really just a wrapper around a Windows access token. It's that token that holds the user and group SIDs, and that's what AzMan needs to construct a client context object.

Below is a sample class that connects to an AzMan store and creates client context objects for an app based on WindowsIdentity objects. Using a wrapper class like this is a good idea, as it can help reduce the surface area of AzMan to expose only the features that your team needs.

using System.Security.Principal;
using Microsoft.Interop.Security.AzRoles;

public class AzManApplication {
    IAzAuthorizationStore store = new AzAuthorizationStoreClass();
    IAzApplication app;

    public AzManApplication(string connectionString, string appName) {
        store.Initialize(0, connectionString, null);
        app = store.OpenApplication(appName, null);
    }

    public AzManClientContext CreateClientContext(WindowsIdentity user) {
        // hand the user's access token to AzMan
        IAzClientContext ctx = app.InitializeClientContextFromToken(
            (ulong)user.Token.ToInt64(), null);
        return new AzManClientContext(ctx);
    }
}
The connection string for an AzMan policy store takes two forms, depending on whether you're storing your policy in an XML file (great for prototyping) or in AD/ADAM (best for production systems):

Sample XML connection string: msxml://c:\temp\mystore.xml

Sample AD connection string: msldap://cn=MyStore,cn=Program Data,dc=Fabrikam,dc=com

Sample ADAM connection string: msldap://servername:port/cn=MyStore,cn=Program Data,dc=Fabrikam,dc=com

Once you have opened an AzMan application, you're ready to authorize clients, and to do that you need an AzMan client context object. The wrapper class above shows how to construct an AzMan client context object from a WindowsIdentity. Note that it's the underlying Windows access token that's used by AzMan to discover the user's groups, which then map onto AzMan roles and operations.

Once you have a client context object for the user, you should demand that the user has permission to perform whatever operation she's requesting. You do this by calling AccessCheck on the client context, as shown by the wrapper class below. This function is a bit tricky to call, which is why it's good to have a helper class like the one shown here.

public class AzManClientContext {
    IAzClientContext ctx;

    public AzManClientContext(IAzClientContext ctx) {
        this.ctx = ctx;
    }

    public void DemandOperation(int opNum) {
        object[] operations = { opNum };
        object[] results = (object[])ctx.AccessCheck("",
            null, operations, null, null, null, null, null);
        // a result of zero means access is granted
        int result = (int)results[0];
        if (0 != result) {
            throw new Exception("Access Denied");
        }
    }
}
Both ASP.NET and WCF support AzMan through the role provider architecture that was introduced in ASP.NET 2.0. This architecture only leverages roles, though; it doesn't use the role-to-operation mapping that I've been talking about in this article. But it is also very simple to use, and it doesn't require you to ever call into AzMan yourself. Essentially, an ASP.NET authorization module runs for every request, maps the user onto an AzMan client context object, then reads the roles from that object and populates an IPrincipal that your application can use to check AzMan roles. The big benefit here is that you don't have to rely on groups directly, which was one of the big wins with roles that I discussed in the Authorization section of this paper. To learn more about this feature, read How To: Use Authorization Manager (AzMan) with ASP.NET 2.0 from the Patterns & Practices group.
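As a rough sketch of how the role provider gets wired up (the store path, provider name, and "PetStore" application name are all placeholders I've invented, and in a real web.config the provider type is usually given with its full assembly name), the configuration might look something like this:

```xml
<configuration>
  <connectionStrings>
    <!-- XML store for prototyping; production would use msldap:// -->
    <add name="AzManStore"
         connectionString="msxml://~/App_Data/AzManStore.xml" />
  </connectionStrings>
  <system.web>
    <roleManager enabled="true" defaultProvider="AzManRoles">
      <providers>
        <add name="AzManRoles"
             type="System.Web.Security.AuthorizationStoreRoleProvider"
             connectionStringName="AzManStore"
             applicationName="PetStore" />
      </providers>
    </roleManager>
  </system.web>
</configuration>
```

With this in place, calls like User.IsInRole in your pages surface AzMan roles without any AzMan-specific code in your application.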

I've really only scratched the surface of AzMan's capabilities in this paper, but most applications don't need much more than the features I've demonstrated here! Simply factoring out authorization decisions into an AzMan policy store will make your application much easier to maintain over the years.

Where Are We?

The .NET Framework has wisely abstracted identity into a couple of generic interfaces. IIdentity represents the results of authentication, and IPrincipal couples that to a set of authorization attributes generically called roles. If you rely on platform authentication, you'll end up with implementations of these interfaces based on Windows access tokens, and the roles will be the groups from the token.

Thread.CurrentPrincipal is the way plumbing communicates identity to applications. PrincipalPermission is a great way to leverage this feature and build self-documenting code. And if you've got a WindowsIdentity for your user, you can easily call into Active Directory to get even more information about that user, such as her e-mail address or phone number.

While it's possible to program role-based access checks using Windows groups directly, you can run into problems because group names include domain or machine names, and it's bad to hardcode those details. Using AzMan to map groups onto roles and roles onto operations gives you much more flexibility at very little cost, and is the best way today to manage your authorization policy. Consider building a few wrapper classes like the ones I showed earlier to simplify your communication with AzMan. You should also seriously consider storing your AzMan policy in Active Directory or ADAM when you deploy your application.


Active Directory Schema

How To: Use Authorization Manager (AzMan) with ASP.NET 2.0

Bundling ADAM with your applications

Federated Identity

Early in this paper I encouraged you to avoid creating identity silos. Each application that has its own private user store simply adds to the burden on IT and helpdesk staff, as well as end users who have to remember yet another set of credentials. Moving toward integrated Windows authentication is a great way to avoid creating another silo. But there's yet another step you can take to reduce these costs even further: support federated identity.

The Problem

Consider two organizations that want to partner with each other. Perhaps this is a supply chain partnership, where employees from a manufacturing company need to access Web applications and services exposed by a supplier. How should the supplier authenticate employees from the manufacturing company? They don't share a Windows domain, and they aren't in the same Active Directory forest. Furthermore, one partner may be using Active Directory while the other may be using a completely different platform!

In the past, the solution was to issue credentials to the employees of the manufacturing company that need to access the supplier's resources. But this results in yet another set of credentials for the employee to manage, and another user account for the IT staff of the supplier to provision. These accounts typically come with a certificate as well, which means the supplier needs to maintain a public key infrastructure and issue client certificates.

Figure 6. Example of limited authentication

Authorization is also troublesome. When an employee at the manufacturing company is promoted, she may need a different level of privilege when she visits the supplier's Web site. This means someone has to remember to call the IT staff at the supplier and have them make these policy changes. Wouldn't this all be easier if the manufacturing company could do this themselves?

Introducing Federated Identity

In the example I just provided, there's already a level of trust present between the supplier and the manufacturer. The supplier has to trust the manufacturer to say which employees should be allowed to use the supplier's Web applications and services, and what level of privilege each employee should be granted. Federated identity automates this trust using the WS-* suite of protocols. When these two organizations decide to federate, there is no longer any need for the supplier to create user accounts for the manufacturer's employees. Instead, the supplier's Web applications and services will communicate using WS-Federation with the manufacturer automatically, allowing the manufacturer to authenticate its own employees.

Figure 7. Example of federated identity

Federated identity also simplifies authorization. As you'll see shortly, the same security token that carries the user's identity can also carry authorization claims. Indeed, a token may not have any identity claims at all, serving only to indicate that the user is authorized for some action without identifying who that user is. With an integrated federation solution such as Active Directory Federation Services (ADFS), as soon as an employee is promoted and her user account in Active Directory is updated to reflect these changes, the claims sent to partner organizations about her will immediately change to reflect her new status. The supplier will then be able to authorize her to access resources that make sense for her new job, without having to change its security policy.

In a nutshell, federated identity is all about automating existing trust relationships and thereby reducing costs. Let's start exploring this concept further by looking at a concrete implementation of WS-Federation called Active Directory Federation Services (ADFS).

Active Directory Federation Services (ADFS)

ADFS was released in December 2005 with Windows Server 2003 R2. This is a product that makes it easy for organizations using Active Directory to federate with partner organizations that have compatible implementations of the WS-Federation passive profile. If the partner org is running Active Directory, they can also install ADFS in order to support federated identity. Otherwise the partner must be using a compatible product such as Tivoli Federated Identity Manager from IBM.

Note   WS-Federation specifies both an active and a passive profile. The difference between the two is simply the type of client software in use. In the passive profile, the client uses a Web browser to access resources, so we can't make many assumptions: we assume the browser supports cookies and SSL, but not much more than that. The active profile assumes a much smarter client application, one that can perform cryptographic operations and thus improve the efficiency of the system considerably.

For this discussion, imagine that the manufacturer and supplier in my previous example are both using Active Directory with ADFS. In ADFS terminology, the manufacturer is called the account partner, because in this particular relationship, this is where the users and the user accounts live. The supplier is called the resource partner, because this is where the Web applications (the resources) are hosted.

Note   Any organization using ADFS can play both roles simultaneously; perhaps the manufacturer also acts as a resource partner to its dealerships, for example.

Like any well-designed security service, ADFS strives to be as transparent as possible. Once both organizations have added each other to their corresponding ADFS trust policies, employees from the manufacturer can surf to Web applications hosted by the supplier without having to provide a second set of credentials. There may be a bit of flashing the first time as the employee's browser is redirected between the target Web app and the resource and account federation servers, but the end result is a seamless single-sign on experience for the user.

Figure 8. Active Directory Federation Services (ADFS)

Here's what happens behind the scenes:

  1. The employee from the manufacturing company points her browser to a Web app at the supplier company. The ADFS single-sign on agent in the Web application notices that the request arrived without an ADFS cookie, and so redirects the client's browser to the supplier's federation server.
    Note   The ADFS SSO agent is an HttpModule that you install in applications that want to support federated identity via ADFS.
  2. The supplier's federation server determines which account partner this request is for (there's only one partner in this case, but this step may require a hint from the user, perhaps by asking the user which organization she belongs to). It then redirects the client's browser to the manufacturer's federation server.
  3. The manufacturer's federation server challenges the client to prove her identity via integrated Windows authentication.
    Note   There are other alternatives for authenticating the client (forms-based login or a client certificate, for example) but integrated authentication is recommended for all the reasons I have been talking about in this paper, not the least of which is single-sign on for the user.
  4. The user is authenticated using a domain account in Active Directory.
  5. If the user is successfully authenticated, the federation server issues a SAML token containing a set of claims that the partner organization will understand. I'll talk more about what these claims look like shortly. The token is serialized into the URL as the client's browser is redirected back to the supplier's federation server.
    Note   SAML == Security Assertion Markup Language. A token consists of XML that includes a set of claims, signed by the issuer of the token.
  6. The supplier's federation server reads the SAML token, verifies that it was signed by a valid partner, and issues a cookie containing a SAML token signed by the federation server. This token will contain a subset of the claims received from the manufacturer's federation server. The client's browser is now redirected back to the original Web application she was trying to access in the first place.
  7. The Web single-sign on agent reads the cookie, verifies the signature on the SAML token from the supplier's federation server, and makes the claims available to the Web application via a class called SingleSignOnIdentity (this class implements IIdentity and is available as you'd expect via the HttpContext.User and Thread.CurrentPrincipal properties).

Virtually all of the work I've described in these steps is handled exclusively by ADFS and Active Directory, which means as a Web application developer, your main job is to simply make authorization decisions based on the claims provided with the request. This can be as easy as making a few calls to IPrincipal.IsInRole and perhaps auditing the value of the IIdentity.Name property, or it can be as robust as loading roles into an AzMan security context based on group claims provided via the SingleSignOnIdentity object.
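In code, those claims checks can be as simple as the following sketch. The role name OrderClerks and the LogAccess helper are illustrative assumptions; SingleSignOnIdentity is installed with the ADFS Web Agent in the System.Web.Security.SingleSignOn namespace:

```csharp
using System.Threading;
using System.Web.Security.SingleSignOn;   // installed with the ADFS Web Agent

public partial class OrderParts : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        // the ADFS SSO agent has already populated Thread.CurrentPrincipal
        if (!Thread.CurrentPrincipal.IsInRole("OrderClerks"))
            throw new System.Security.SecurityException("not authorized");

        // drill into the federated identity for auditing
        SingleSignOnIdentity id =
            Thread.CurrentPrincipal.Identity as SingleSignOnIdentity;
        if (id != null)
            LogAccess(id.Name);   // LogAccess is a hypothetical audit helper
    }

    void LogAccess(string name) { /* write an audit record */ }
}
```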

Claims, Transformations, and Security Token Services

At this point you might be wondering just what a claim is and where it comes from. In ADFS, claims naturally originate from Active Directory. For example, the user principal name, alice@fabrikam.com, is an identity claim that identifies the user and can be used to audit her actions. ADFS also supports group claims, and can be configured to emit a claim for a user if she is a member of a particular Active Directory group. Finally, ADFS supports custom claims, which are simply name-value pairs extracted from properties of the user object in Active Directory; for example, the user's phone number could be exposed as a claim. Other systems besides ADFS might support other types of claims, but each claim is identified by a URI, which helps everyone agree on what each claim means.

Imagine that the manufacturing company in my example has a group called InventoryManagers, and members of that group need to be authorized to use a parts ordering form in the supplier's Web application. The supplier doesn't necessarily have a group with that same name. This is where claims transformation comes in: we need to be able to convert the claims that one system knows about into claims that a federated partner understands. Perhaps in the supplier's Web app, users that are authorized to access the parts ordering form must present a group claim named OrderClerks. In this case, we'd want the ADFS federation server at the manufacturing company to emit an OrderClerks group claim for anyone in the InventoryManagers group. This is the sort of mapping you can configure in ADFS when you federate with a new partner. And if this sort of static mapping isn't sufficient, you can always compile and register a claims transformation module, which is just a .NET assembly containing a class that implements IClaimTransform. This allows you to inspect the claims provided by ADFS and add or remove claims as you see fit.

As you can probably imagine, claims transformation is really important to federation. Every company has its own "language" for representing identity and authorization information such as groups. By allowing the account partner to supply these claims directly, the resource partner is always getting the most current information. If an employee is added or removed from a group, the next time she logs in, all the resource partners she accesses will see the change.

Claims transformation is so important that there's actually a name for a service that performs these transformations: it's called a Security Token Service, or STS for short. An STS takes a signed set of claims as input and produces a signed set of claims as output. The federation server that's part of ADFS (and that I showed earlier in my ADFS diagram) is an example of an STS. The manufacturing company's federation server (STS) takes as input a Kerberos ticket from the user's browser and converts that into a SAML token that the supplier can understand. That's really just an example of a claim transformation. The Kerberos ticket is a set of claims signed by Active Directory, while the SAML token is a set of claims signed by the federation server. In fact, generalizing this idea of claims across security token technologies in this way leads to a very powerful concept, the identity metasystem.

The Identity Metasystem

Kerberos tickets, X.509 certificates, and SAML tokens, oh my! These (and many other) technologies are used to communicate identity information between systems, much like Ethernet, Token Ring, and other physical networks carry signals between computers. But it wasn't until the world agreed on TCP/IP that the Internet as we know it today was able to blossom. There had to be some encapsulating protocol that allowed us to bridge the gap between your Token Ring network and my Ethernet network before we could communicate without friction.

Can you imagine a future where authenticating a user would be as easy as connecting to a Web server? Oh, the innovation that would be possible in such a world! Of course we must temper our excitement by remembering that for every great feature we design, there's a bad guy out there trying to figure out a way to exploit it. Unlike with TCP/IP, we're considering this earlier rather than later.

In order to promote a system that respects the individual's right to privacy, a set of laws has been proposed by Kim Cameron. These seven laws of identity (http://www.identityblog.com/?page_id=354) have emerged after much consideration by a broad coalition of leading identity thinkers who have taken part in the discussion on Kim's blog at www.identityblog.com. The vast majority of these people aren't Microsoft employees, either (take for example Doc Searls, the senior editor of Linux Journal). The emerging opinion is that an identity system that breaks any of these laws cannot be stable: it will fail, or simply prove unacceptable to individuals.

Interestingly enough, one of these laws states that there is no one identity technology or operator that will satisfy everyone's needs in all contexts, which is why we need a metasystem. An identity metasystem is a framework into which we can plug identity technologies such as Kerberos tickets, X.509 certificates, SAML tokens, and whatever other identity representations that you can describe with ones and zeros. As long as it is a technology that makes sense for both end users as well as Internet applications that serve those users, there will be a way to plug it into this metasystem. It's interesting to note that Microsoft Passport doesn't satisfy the seven laws by itself (it works great with Microsoft properties such as MSN and Hotmail but isn't something I want to use when I talk to my bank!). However, Passport would be a perfectly natural addition to a portfolio of identities under this metasystem. In this world, Passport simply becomes one of many identity providers; there simply cannot be one identity provider to rule them all!

Before the Internet could really take off, participants had to agree on a few basic protocols: TCP/IP for transmitting packets around, DNS for discovering TCP/IP endpoints, and so on. The same goes for the identity metasystem. There need to be some basic protocols that we all agree upon in order to make the metasystem real. The Microsoft proposal is based on the WS-* suite of Web service specifications. For example, WS-Security is used to define XML representations of security tokens and secure their transport between Web endpoints, while WS-Trust is used for claims transformation. WS-MetadataExchange and WS-Policy are used for discovery, helping to negotiate the type of token and claims required by the parties who need to authenticate with each other.

In the identity metasystem, there are three kinds of players: subjects (entities such as users or even devices that need to access Internet endpoints), identity providers (such as Verisign, many of whom already provide identity services for the Web), and relying parties (Internet endpoints that need to authenticate subjects or otherwise consume identity information). But in order to complete the story of how these three interact, I need to introduce the last piece of the puzzle, a part of the identity metasystem currently code named InfoCard.


InfoCard

Two of the laws of identity deal with the way humans interact with the metasystem. One states that the human needs to be an integral part of the security protocol, which is clearly necessary given the plethora of phishing and other social engineering attacks common at the time of this writing. The other law I'm thinking of states that the human interface to the metasystem should be consistent and feel real no matter what underlying technology is being used to represent identity information.

InfoCard is the current code name for a system that Microsoft is using to surface the metasystem to humans, and it's been designed to satisfy these (and the other) laws of identity. The most visible piece of InfoCard is the identity selector, which helps a user organize digital identities as a set of cards, which look just like the plastic cards you keep in your pocket. Computer users are already familiar with credit cards, library cards, driver's licenses, and so on, so modeling a digital identity as a card is a great way to make the rather abstract concept of identity more concrete.

Each card has a data structure behind it that contains all the metadata needed to help the user choose a card in a particular context. For example, if the user surfs to a Web site that accepts SAML tokens issued by Verisign, Thawte, or GE, the identity selector will pop up and give the user a chance to pick a card to use with the Web site. But only cards that are based on SAML tokens issued by Verisign, Thawte, or GE will be lit up when the identity selector pops up in this context. After the user selects a card, the identity selector will "dereference" that card, contacting the identity provider and obtaining the actual claims that the card represents (which the user gets to approve, by the way, before sending to the Web site). In short, cards contain enough information to help the user make a choice, but the identity provider is actually the one holding the user's personal information that's represented by the card. The identity provider acts as an STS, issuing security tokens containing the desired claims. Note that there is a local identity provider that ships with InfoCard, allowing you to create self-issued cards, although the personal information allowed in these cards is very limited given that it will be stored on your hard drive.

One way InfoCard advances the human integration story is by making use of a feature of X.509 certificates called logotypes (RFC 3709). This allows the certificate's subject and issuer to be represented not only by a name, but also by an image that can be shown to a human. This will help to encourage humans to verify the identity of the sites they are visiting before submitting personal information to those sites!

InfoCard and the identity metasystem as a whole put the user at the center. Nothing stops me from creating cards that have false information to give to sites that I really don't trust (but require me to supply personal information to get the service I need). And I can have as many cards as I want to: there's no need to try to have a single card that works everywhere.

The picture below shows how the metasystem brings together subjects, identity providers, and relying parties.

Figure 9. Identity metasystem

Programming a Web service to accept tokens from the identity metasystem (in other words, to play the role of a relying party) is easy on this platform if you use WCF to build your Web service, as you'll see later.

The identity selector is installed into the operating system as a privileged service. It will first ship with Windows Vista, but will also be installed automatically wherever WinFX is installed. WinFX can be installed on Windows XP and Windows Server 2003, so these platforms will also have access to an identity selector. Microsoft has even offered to provide advice to others for building similar selectors into competing operating systems, because the identity metasystem will only succeed if there is broad adoption.

Where Are We?

Federated identity helps to eliminate yet another layer of identity bloat by automating existing trust relationships between partners. This can significantly reduce the cost of deploying and maintaining an application used by partner organizations. Supporting federated identity means building identity and claims aware applications, which is pretty easy using ADFS (or WCF as you'll see shortly).

The identity metasystem and InfoCard are all about extending the notion of federation and claims-based identity to the Internet as a whole. By allowing the user to choose her own identity technologies and providers, and by putting the user at the center, the identity metasystem follows the broadly accepted seven laws of identity and therefore has a very good chance of becoming the identity layer of the Internet.



Security in the Windows Communication Foundation (WCF)

WCF is the modern communication stack for the Windows platform, and it was designed from the ground up with security in mind. This section will help you understand the security features you should expect from WCF and your responsibilities as you develop secure clients and services using WCF.

Security Goals

WCF has a lot of plumbing to help you build secure connected systems. Here are some of the most important security goals that WCF strives to help you achieve:


Confidentiality

WCF uses modern ciphers like AES, defaulting to key lengths that are recommended by conservative cryptographers like Ferguson and Schneier in their book, Practical Cryptography. More importantly, encryption keys are exchanged using modern authentication algorithms such as Kerberos and SSL/TLS. Encryption of message bodies is turned on by default with all standard bindings except the basic profile.


Integrity

WCF messages are signed by default. Coupled with replay detection, this helps ensure that the message your service receives actually came from the client you think it did. If the message was tampered with on the wire, you won't see the message because the WCF security plumbing will reject it at the door. Bodies and most headers are signed by default with all standard bindings except the basic profile.


Authentication

There's not much point in exchanging signed and encrypted messages if you don't know who is on the other end of the wire! WCF performs mutual authentication by default with all standard bindings except the basic profile. WCF prefers to use mature authentication protocols such as Kerberos and SSL/TLS wherever possible.


Authorization

While WCF doesn't provide a lot of infrastructure for actually implementing authorization, it does provide the hooks you need to wire up to your favorite authorization vehicle, such as AzMan or the ASP.NET role providers.


Auditing

WCF audits many security-sensitive operations, and you can enable auditing using the ServiceSecurityAudit behavior. Audit output goes to the event log, and you can direct the logged events into either the Security or Application log.
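For example, a service behavior like the following illustrative config fragment turns on auditing of authentication and authorization events and sends them to the Application log (the behavior name is made up; the attribute values are the interesting part):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="AuditedService">
      <!-- audit authentication successes and failures,
           but only authorization failures -->
      <serviceSecurityAudit
          auditLogLocation="Application"
          messageAuthenticationAuditLevel="SuccessOrFailure"
          serviceAuthorizationAuditLevel="Failure" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```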

Security and Bindings

A binding in WCF determines what plumbing sits between the wire and your code in a WCF service or client. There's been a lot of blood, sweat, and tears put into this plumbing so that all you need to do is tell WCF what your service's security needs are, and it will do the heavy lifting for you. The most common way to configure bindings is to simply pick from a suite of standard bindings. Here is a table outlining the most commonly used bindings:

Binding Comments
basicHttpBinding This is the WS-I basic profile binding. It's not secure by default but can be run over SSL/TLS, which is a common practice today for securing Web services that conform to the basic profile.
wsHttpBinding This is the binding you'll use if you want to support the WS-* suite of protocols. Secure by default using message-mode security with Windows credentials.
wsDualHttpBinding Similar to wsHttpBinding as far as security is concerned.
netTcpBinding This is the binding you'll use for raw speed where cross-platform interop isn't desired. Secure by default using built-in transport security with Windows credentials.

Once you choose a standard binding, you can either use the default security settings for the binding as I've described above, or you can tweak those settings to suit your needs. If you decide to make changes, there are two major decisions you'll need to make. Even advanced WCF programmers creating their own custom bindings need to make these same decisions.
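Here's an illustrative config fragment showing where those two decisions surface when you tweak the wsHttpBinding; the binding name is made up, and the mode and clientCredentialType attributes correspond to the two steps that follow:

```xml
<bindings>
  <wsHttpBinding>
    <binding name="PartnerBinding">
      <!-- Step 1: choose a security mode -->
      <security mode="Message">
        <!-- Step 2: choose the type of client credential -->
        <message clientCredentialType="Certificate" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
```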

Step 1: Choose a Security Mode


None

A security mode of None means you don't want to bother with authentication, integrity, or confidentiality. Clients and services using this mode will be vulnerable to a whole slew of nasty attacks (such as client impersonation, service spoofing, eavesdropping, and so on), so be sure to do some serious threat modeling before you pick this mode!


Transport

Transport security mode is a simple, mature, and high performance choice, and is commonly used in interop scenarios (think HTTPS).

If you're using an HTTP binding, you'll need to configure SSL/TLS outside of WCF (typically in IIS, for example) to secure the channel. On the other hand, WCF implements transport security for the netTcpBinding intrinsically (via Windows integrated authentication). Transport security is great for point-to-point services where there are no intermediaries such as routers.


Message

Message security mode is infinitely flexible. It uses the WS-* suite of protocols (WS-Security, WS-SecureConversation, and WS-Trust, for starters). These protocols allow a wide variety of client credentials, from X.509 certificates and Kerberos tokens to signed claim sets issued by security token services (an InfoCard identity provider, for example). You can design sophisticated systems with these protocols. For example, since WS-Security supports multiple security tokens for any message, there's nothing stopping you from building dynamic message routing where not only the original client but also the router provides proof of identity to the service. This is really cool!

The tradeoff for this level of flexibility is that message security isn't as mature as transport security, and you'll likely not have as easy a time interoperating with non-WCF clients/services if you use it. As of this writing, you simply can't beat a basic profile Web service running over HTTPS for interoperability, but going forward this will be less of an issue as more vendors implement these protocols and more cryptographers analyze them for weaknesses.

Step 2: Choose the Type of Client Credential

Both message and transport security plumbing in WCF are configured primarily based on the type of credential you expect the client to present. Your choice here will dictate not only the shape of credential WCF will demand of your clients, but also the shape of the credential your service must use. This choice also determines the authentication protocol that will be used. There are five client credential types in WCF, and they are represented by the ClientCredentialType enumeration:


None

This is the simplest client credential: none at all! In this case the client will be anonymous, but the service will be authenticated with a certificate. This is similar to what you see when you visit an Internet store: you're browsing as an anonymous user, but when you browse over HTTPS, the Web site identifies itself to you via an SSL certificate. It's important to authenticate services to clients so the client knows the service isn't being spoofed. (Remember, clients often send sensitive information to services, such as credit card numbers and passwords!)

User name

Here the service uses a certificate to identify itself to clients, and clients supply a user name and password to the service. By default WCF looks for a matching Windows user account and attempts to establish a logon using the supplied password. If the user doesn't supply the correct password for the account, the request will be denied by the WCF security plumbing.

If you don't maintain Windows user accounts for clients, you can supply a class that authenticates against a user store of your choice, or you can wire up an ASP.NET membership provider (such as the SQL provider that comes with ASP.NET) to authenticate your users.
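Wiring a service up to an ASP.NET membership provider can be sketched like this (the behavior name is made up; AspNetSqlMembershipProvider is the SQL provider that ships with ASP.NET, and the provider itself is configured separately in the usual ASP.NET way):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="MembershipAuth">
      <serviceCredentials>
        <!-- validate user name tokens against a membership provider
             instead of Windows accounts -->
        <userNameAuthentication
            userNamePasswordValidationMode="MembershipProvider"
            membershipProviderName="AspNetSqlMembershipProvider" />
      </serviceCredentials>
    </behavior>
  </serviceBehaviors>
</behaviors>
```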


Windows

Select this option if you want your users to authenticate using their Windows single-sign on credentials. Both client and server will use their Windows credentials to authenticate. This is by far the best choice for Intranet applications in a domain environment. You don't have to hassle with certificate deployment or custom user names and passwords, and you can easily leverage groups or AzMan for authorization. Start here if at all possible — it's really a no-brainer.


Certificate

In this case, both the service and client have their own certificates, which they present to each other in order to authenticate. As of this writing, this option is popular for building business-to-business solutions where each side identifies itself using a certificate. In the future, many of these solutions will use federation instead, which is what the next credential type is all about.


Issued token

Select this option if you want your service to support claim sets such as those supplied by the security token service (STS) I described earlier with the identity metasystem and InfoCard. Your service will identify itself to clients via a certificate, while clients will present a security token issued from an STS. You'll need to support claims-based authorization, but you'll have incredibly broad reach!

You Can Have More Than One Binding!

One of the great things about the WCF architecture is that you can expose a service contract on several different endpoints. For example, you might want a service to be available not only to employees of the organization but also to external business partners. To support local employees, expose one endpoint that supports Windows credentials, bestowing upon employees a seamless single-sign on experience. To support external partners, expose a second endpoint with a more Internet-friendly clientCredentialType such as Certificate or IssuedToken, depending on your needs. And if you are careful to separate your business logic from your authorization logic, you can have a single implementation of the service that supports both endpoints simultaneously!
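Here's a sketch of what that dual-endpoint configuration might look like; the service name, addresses, contract, and binding configuration name are all made up for illustration:

```xml
<service name="PartsOrdering.Service">
  <!-- intranet endpoint: Windows credentials, seamless SSO for employees -->
  <endpoint address="net.tcp://orders:9000/PartsService"
            binding="netTcpBinding"
            contract="PartsOrdering.IPartsService" />
  <!-- internet endpoint: certificate credentials for external partners -->
  <endpoint address="https://orders.example.com/PartsService"
            binding="wsHttpBinding"
            bindingConfiguration="CertificateClients"
            contract="PartsOrdering.IPartsService" />
</service>
```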

Authorization Strategies

No matter what authorization strategy you choose, remember the advice I gave in the section on Authorization: do your best to factor out authorization policy from your business logic. Authorization rules often need to be flexible and configurable by an administrator. Hardcoded rules are really tough to change!

Depending on the shape of client credential you choose, authorization can be as simple as grabbing the client's WindowsIdentity and impersonating it before accessing resources, or as complicated as building a claims-based authorization policy complete with a user interface that allows an administrator to tweak the policy at run time.

For example, when your client supplies a Windows credential, you not only have the ability to impersonate the client, but you can also use Windows groups for authorization, or with a few extra lines of code wire up to AzMan to perform access checks on operations. On the other hand, if your client supplies a certificate, you don't have any of these options. You could certainly map the cert onto a "shadow" Windows account, but now you're forcing the administrator to provision Windows accounts for your users, which can significantly increase the cost of maintaining your application.
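For the Windows-credential case, the impersonation approach mentioned above can be sketched in a few lines. This is a sketch only; impersonation should be used sparingly, since it pushes your service's access checks down onto each resource:

```csharp
using System.Security.Principal;
using System.ServiceModel;

// inside a service operation, when the client supplied a Windows credential
ServiceSecurityContext sctx = ServiceSecurityContext.Current;
using (WindowsImpersonationContext ctx = sctx.WindowsIdentity.Impersonate())
{
    // while this block executes, resource access (files, SQL Server via
    // integrated security, and so on) happens under the client's identity
}   // disposing the context reverts to the service's own identity
```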

Going forward, it's clear that the future is in claims-based authorization, because it enables federated identity scenarios. As I discussed earlier in this paper, one of the major benefits of federated identity solutions is that there's no need to provision these shadow accounts for users. It's also the path forward for InfoCard. While there's not a lot of direct support for claims-based authorization today on the Windows platform (for example, there's not a version of AzMan that is claims-based), I expect to see much innovation from Microsoft in this area going forward.

Programming WCF Security

While the vast majority of security features in WCF can be controlled simply by flipping switches in config, at some point you're probably going to want to write code against WCF's identity model. One of the most common reasons to do this in a service is to discover the claims being presented by your client. The most basic way to read these claims is to get the collection of ClaimSets from the AuthorizationContext exposed by the ServiceSecurityContext of the request. You can then enumerate the claims about the subject. For example, you can use this technique to discover details such as the subject name on the client's certificate. Here's an example:

public string getSubjectName()
{
    // if sctx is null, the client has not been authenticated
    ServiceSecurityContext sctx = ServiceSecurityContext.Current;
    if (null == sctx) throwAccessDenied();

    // we expect the client to supply a cert to authenticate
    X509CertificateClaimSet cs =
        sctx.AuthorizationContext.ClaimSets[0]
            as X509CertificateClaimSet;
    if (null == cs) throwAccessDenied();

    // SubjectName is an X500DistinguishedName; Name is its string form
    return cs.X509Certificate.SubjectName.Name;
}

Taking a claims-based approach is best for applications that need to support identity federation, but for simpler applications you can often avoid going to this level by simply programming against IPrincipal and IIdentity. This makes sense, for example, if you expect your clients to supply Windows credentials.
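For example, with principalPermissionMode left at its default of UseWindowsGroups on the serviceAuthorization behavior, WCF populates Thread.CurrentPrincipal for you, and a role check becomes a one-liner. This is a sketch; the group name below is a placeholder:

```csharp
// Sketch: relies on WCF setting Thread.CurrentPrincipal
// (serviceAuthorization principalPermissionMode="UseWindowsGroups").
// Requires System.Security, System.Security.Principal, System.Threading.
IPrincipal client = Thread.CurrentPrincipal;
if (!client.IsInRole(@"FABRIKAM\Managers"))
    throw new SecurityException("Access denied");
```

You can get the same effect declaratively by decorating an operation with the PrincipalPermission attribute, which keeps the check out of your method body entirely.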

Input Validation

While it might be tempting to walk away from this topic and say, "Just use WCF and your apps will be secure," there are still a lot of basic precautions you need to take to ensure that your services are secure enough. Threat modeling is a great way to prioritize your work so that you're securing the most risky areas of your application first. Input validation is also a must; any developer on your team can unknowingly introduce nasty security vulnerabilities such as SQL injection simply by writing naïve code. Input is evil until proven otherwise!
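The classic defense against SQL injection is to keep user input out of the SQL text entirely by using parameterized queries. Here's a hedged sketch; the table and column names, connection string, and input variable are hypothetical:

```csharp
// Sketch only: parameters are sent out-of-band from the SQL text,
// so hostile input can't rewrite the query.
// Requires System.Data.SqlClient.
string sql = "SELECT OrderId, Total FROM Orders WHERE CustomerName = @name";
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@name", userSuppliedName);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // process rows...
    }
}
```

Contrast this with concatenating userSuppliedName into the SQL string, where a value like ' OR 1=1 -- changes the meaning of the query.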

Where Are We?

WCF does much of the heavy lifting required to build identity-aware applications, including authentication, integrity, confidentiality, and auditing. WCF also makes it easy to add your own authorization logic by supplying the hooks you need to wire in AzMan, ASP.NET role-based security, or a claims-based solution that works with the identity metasystem. Keep in mind that claims-based authorization is a big piece of the puzzle in the future of connected, federated systems, so be thinking about it today!

Virtually all of the WCF bindings except the basic profile are secure out of the box, providing encryption and integrity checking of all messages, and authentication (typically using Windows credentials by default). Don't forget that you still need to follow basic secure coding guidelines, though.


Writing Secure Code, Second Edition

ACE threat modeling tool

Patterns & Practices threat modeling guidance

Patterns & Practices online input validation training modules
