Securing the Username Token with WSE 2.0
Web Services Enhancements 2.0 for Microsoft .NET (WSE 2.0)
Summary: Get guidance on dealing with the Username token: examine two classes of attacks against a typical system that employs Username tokens, and discover mitigating techniques to bolster your system against these attacks. (10 printed pages)
Download the associated sample code, WSE UsernameToken Sample.msi.
Contents
- Attacks Against Encrypted or Signed Data
- Attacks Against the Server Account Database
- Using a "Password Equivalent"
- The Five Step Recommended Solution for Using Username Tokens
- A Non-Solution: Sending Username Tokens in the Clear
Web service developers have trouble dealing with humans, and I'm not just talking about getting dates. If you issue a certificate to a human, he'll repave his machine the next day and lose his private key. If you give him a smartcard, he'll leave it at home. If you ask him to remember a password, he'll choose a trivial one. If you ask him to remember a better password, he'll write it down on a sticky note on his monitor. There's just no way to win. But so often we must bite the bullet and build systems that cater to this imperfect species.
When humans are involved, you'll often see the Username token used to authenticate with a server. When used with care, a Username token can become part of a system that delivers real security. But you really need to plan ahead. In this article, I will provide some guidance on dealing with the Username token in a rational way.
I'll start by discussing the two classes of attacks against a typical system that employs Username tokens, and then follow up by describing some mitigating techniques to bolster your system against these attacks.
Attacks Against Encrypted or Signed Data

An attacker who sees any data that has been encrypted or signed with a key can mount a brute force attack by simply trying all possible keys to decrypt the data or replicate the signature. Note that this is an offline attack, so standard server-side techniques such as account lockout or login delays after a certain number of failed attempts won't save you. The attacker can focus all of his computing resources on guessing the key, and because it's happening offline, you won't even be aware that your system is under attack. For long session keys (typically 128- to 256-bit keys generated by a strong cryptographic random number generator seeded with high-quality entropy), this type of attack is infeasible because the keyspace is so vast. For example, SSL uses a strong session key and is not vulnerable to these types of attacks.
But when a password is the source of the encryption or signature key, this type of attack becomes very efficient, because the keyspace is much smaller. As an extreme example, if your password policy allows one-character ASCII passwords, an attacker only has to guess about 100 different keys before he happens on the right one.
If your password policy requires at least six characters, the keyspace grows, but the attacker will now use a dictionary of commonly used passwords (or simply a list of all words in the primary language the server supports) to make his attack more efficient. Eventually he'll find someone who constructed his password based on one or more entries in the dictionary.
If you enforce a password policy with standard complexity requirements, and even go so far as to perform your own dictionary attack against all new passwords registered with the system, you can reduce this particular risk, but then password length becomes an issue. People simply cannot remember long passwords.
At this point, you may be asking yourself what any of this has to do with Username tokens. Consider that a Username token is often used to sign messages, and with a little work, can even be used to encrypt messages. If you do this, and you allow an attacker to see these signatures or ciphertext, you'll be opening yourself up to the dreaded offline brute force and dictionary attacks I describe above.
The Web Services Enhancements (WSE) team is so concerned about the misuse of Username tokens that as of SP2, the WSE 2.0 token-issuing framework will reject any request that contains an unencrypted Username token (one acceptable form of encryption is simply to use SSL). And there will be no configuration option to change this behavior. If you really want to relax this restriction, you'll need to write code to do it.
The bottom line is this: keys based on passwords will always be weak. Given the threat of an offline dictionary or brute force attack, it's a really bad idea to expose ciphertext or signatures based on a password. This includes the digest produced by PasswordOption.SendHashed in WSE!
Attacks Against the Server Account Database

If an attacker manages to compromise a server that holds a password database, and if those passwords are stored in their raw, cleartext form, the attacker will immediately be in possession of extremely valuable material. Humans tend to reuse passwords, either directly or by making slight modifications on a theme. Often by looting the account database of one site, an attacker gains immediate access to many other sites, exposing users to identity theft or worse.
Responsible architects do not design systems that store cleartext passwords, or even encrypted passwords. They know that encryption doesn't eliminate secrets; it just moves them around. A server that needs to decrypt a password needs a key to do that, and if the key is available to the server, it's also available to the attacker who has compromised that server.
A responsible architect will build a system that stores password derivatives instead. By deriving a password "validator" via a one-way hash, for example, the server can verify a password without having to actually store the original cleartext password. And because the operation is one-way, no key is required.
But make no mistake: an account database protected by the best one-way salted, iterated hash is no match for an attacker who has stolen the database. This attacker can take all the time she needs, and apply all the computing power at her disposal to mount dictionary and brute force attacks against those hashes. At this point the combination of a strong password policy and good detection countermeasures is the only thing that can limit the damage. Your password policy will slow down the attack, and your detection countermeasures will alert you to the compromise, at which point you can start notifying your customers. Whether you'll have any customers left after such a breach is another question entirely, as changing a password that is used at many different sites ranges from inconvenient to impossible for most users.
A strong password policy has another benefit: it may force someone who uses the same password everywhere to pick a unique password for your site, since you'll simply reject his standard password in the first place!
Clearly a strong password policy is a good idea. But what makes one better than another? How long of a password should you require? As with most security countermeasures, the strength of the defense must be proportional to the value of the assets you're defending. At one extreme, if you have no valuable assets (and your reputation is one of those assets, by the way), you don't need a password policy at all. You have nothing to lose, so there's no point in inconveniencing your users by rejecting the first password they choose. In fact, why even bother asking for a password in the first place? At the other extreme, passwords might not even be an option: you might instead prefer a public key infrastructure with smart cards for your clients so you get multi-factor authentication.
If you're somewhere in between, you can use a formula to calculate the number of steps required to brute force a password:
steps = (charChoices ^ minLength) / 2
where charChoices is the number of unique characters typically chosen for a password.
Of course this formula doesn't take into account knowledge of the person who picks the password, or the potential for dictionary attacks, so the number of steps will often be less.
Say you require a six-character password that uses a combination of upper- and lowercase letters (that's 26 * 2 = 52 possibilities for each character); a brute force attack will cost about 2^34 steps. The kid next door could brute-force this on his Game Boy. How about an eight-character password that additionally requires numbers and punctuation, which adds maybe 25 more character choices for a typical user? This raises the bar to about 2^50 steps, and you're looking at maybe a year on a typical PC today, but specialized hardware might drop that down to more like a week, or even milliseconds, depending on the level of funding.
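To get a feel for the numbers, the formula above can be evaluated directly. Here is a quick sketch (Python used purely for illustration; the function name is my own):

```python
import math

def brute_force_steps(char_choices: int, min_length: int) -> float:
    """Average number of guesses needed to brute-force a password of
    the minimum length drawn from char_choices possible characters."""
    return (char_choices ** min_length) / 2

# Six characters, upper- and lowercase letters only: 52 choices each.
print(f"2^{math.log2(brute_force_steps(52, 6)):.1f}")  # 2^33.2 on average

# Eight characters with digits and punctuation: roughly 77 choices each.
print(f"2^{math.log2(brute_force_steps(77, 8)):.1f}")  # 2^49.1 on average
```

Note that the average-case cost is half the full keyspace; an unlucky attacker may need nearly twice as many steps.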
A strong password policy inconveniences your customers. On the other hand, the weaker your password policy, the more vulnerable your password database becomes, and to reduce that risk, you need to apply stronger countermeasures to protect and detect the compromise of that database.
I'll start with an overview of countermeasures that can help mitigate the risks that come along with the Username token. Then I'll blend them together in a couple of examples to give you some guidance on how to apply them in your own systems.
The best way to protect a Username token is to encrypt it using a strong key before it goes out on the wire. For example, if you're using SSL to secure a simple point-to-point connection between client and server, you could use PasswordOption.SendPlainText on the Username token and let the secured channel protect the password. From a security standpoint, this is equivalent to using basic authentication over SSL. The only problem now is that once the HTTP listener plumbing on the server side decrypts the payload, the user's plaintext password is available to all code in the server-side pipeline, which may not be acceptable.
You can further protect the user's password by having the client hash the password before sending it over the SSL channel. This would prevent the server-side pipeline from being constantly exposed to plaintext user passwords (remember, most people use the same password everywhere, so it's not just your server we're worried about here). Client-side hashing and the more general notion of password equivalents are important topics that you'll see throughout this article.
By the way, SSL isn't the only way of encrypting Username tokens. If you're using WS-Security (and perhaps WS-SecureConversation) to secure your messages, you can use the public key from the server's X.509 certificate to encrypt a strong, random, symmetric key with which you can encrypt the token.
You'll also need to encrypt any signatures that were created with a Username token. Remember, we don't want any password-based ciphertext or signatures to be seen by an attacker. If you can use SSL, do it. SSL has been through a lot of public scrutiny and cryptanalysis, and you won't need to worry about what's being encrypted and what's not: the entire message will be encrypted.
At some point, the server needs to verify the client's password, and therefore will need some sort of account database. As I mentioned earlier, this account database is a valuable asset that needs protection. One technique you can use to protect a database of passwords is to store a salted, iterated hash in place of the real password. The goal here is to slow down an offline brute force or dictionary attack against a stolen account database, giving you time to detect and react to the breach by notifying your customers.
Another technique you might consider is the use of one-time passwords (OTP), which you can read more about in RFC 2289. OTP is beyond the scope of this article, however.
Using a "Password Equivalent"

Here's a brief excerpt from the OASIS UsernameToken Profile 1.0:
"Passwords of type wsse:PasswordText and wsse:PasswordDigest are not limited to actual passwords, although this is a common case. Any password equivalent such as a derived password or S/KEY (one time password) can be used...It is not the intention of this specification to require that all implementations have access to cleartext passwords."
You see, the client is allowed to pre-process simple passwords before sending them to the server via a Username token. One approach is to simply hash the password before sending it to the server over an already secured channel (SSL, for example). In this case, the server authenticates the client by asking her to prove that she knows the password hash, which implies that she also knows the password.
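As a minimal sketch of that first approach (Python purely for illustration; in a real WSE client this pre-processing would live in the plumbing that builds the Username token):

```python
import hashlib

def password_equivalent(password: str) -> str:
    """Hash the cleartext password on the client; the server stores and
    verifies this value, so the raw password never crosses the wire.
    The channel itself must still be protected, e.g., by SSL."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# The client puts this equivalent, not the password, into the token.
token_password = password_equivalent("my secret passphrase")
```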
Another approach is to salt and hash the password before sending it to the server, creating a one-time hash that the server can use to verify knowledge of the password. This is occasionally done without any other channel security, such as SSL. This rather complex technique is still open to offline dictionary and brute force attacks, and therefore is not recommended, but I'll discuss it for those who insist on going down that treacherous path.
The Five Step Recommended Solution for Using Username Tokens

STEP 1: Authenticate the server using X.509 certificates. First of all, the server must be authenticated to the client using a strong authentication mechanism. You don't want your customers sending their passwords to a spoofed server! Use SSL or WS-Security with a server-side X.509 certificate. The SSL handshake requires the server to prove knowledge of his private key. You can use WS-Security to establish the same proof if SSL isn't an option.
STEP 2: Encrypt the Username token. You must ensure that all Username tokens are encrypted using a strong key derived from the authentication exchange mentioned above. This will help ensure that only the authentic server can decrypt the token. If you're using SSL, the entire payload will be encrypted, which satisfies this requirement. If you are using WS-Security with a server certificate, you should ensure that all Username tokens (and any signatures generated by those tokens) are encrypted with a strong session key that is itself encrypted with the server's public key.
STEP 3: Avoid exposing signatures or ciphertext generated with a Username token. Don't ever encrypt any data with the Username token. Prefer instead to rely on a strong session key established during the server authentication handshake. Either SSL or WS-Security with a server certificate makes this possible. If you sign anything with the Username token, ensure that the signature itself is encrypted with that same strong session key. Remember our goal that no ciphertext or signatures based on passwords be visible to an attacker because it is then vulnerable to offline dictionary attacks.
STEP 4: Hash the Username token password equivalent. Use a password equivalent in the Username token as opposed to a simple cleartext password. If you're not using a one-time password or similar scheme, use the simple SHA-1 hash described for wsse:PasswordDigest. This will reduce the exposure of the client's password as the Username token flows through the server-side pipeline.
STEP 5: Protect your account database with a unique salt for each entry. Don't store clear text or reversibly encrypted passwords in your server-side account database. Instead, store a salted, iteratively hashed password verifier. The input to the verification process will be the hashed password sent by the client and the salt looked up from the account database. If you're wondering what the heck I'm talking about, bear with me, as I will later describe it in detail.
This is the guidance I recommend you follow. Now let's see how complicated things get when you try to send Username tokens without strong encryption.
A Non-Solution: Sending Username Tokens in the Clear

Let me state up front that this is not a recommended approach, but enough people will want to do this that it warrants discussion. Hopefully the discussion alone will convince you to reconsider the simpler, recommended approach above.
OASIS describes three ways of sending a password via a Username token. The first is to send the password (or password equivalent) in the raw. The second is to send a hash of the password instead. The third is to send a one-time hash that includes a nonce and timestamp.
If you're not going to encrypt the Username token, the first two techniques are subject to replay attacks (sending raw passwords in the clear is utter lunacy anyway). The third technique (when used in conjunction with a server-side replay cache) can mitigate this problem, but is still subject to offline dictionary and brute force attacks, which is why I cannot recommend it. It's unfortunate that OASIS is not more careful to point this out in their discussion of the one-time hash.
Another problem with the one-time hash approach is that it encourages server-side plumbing that stores cleartext passwords on the server. Consider how the server must react when sent a hash of a nonce, timestamp, and cleartext password. He must look up the user's clear text password, and combine it with the nonce and timestamp sent in the message in order to calculate the hash value.
As I mentioned earlier in this article, storing cleartext passwords in server-side account databases is a very dangerous practice. If you're going to use the one-time hash approach to send unencrypted Username tokens (even though you've been warned of the dangers), you should at least find an approach that strengthens the server-side account database.
The first approach you might consider here is to opt for using a password equivalent, perhaps by having the client plumbing hash the password before running the one-time hash algorithm that produces the Username token. In this case, the shared secret is SHA-1(password), and the server-side account database can now store SHA-1(password) instead of the clear text password.
This is a step in the right direction, but an attacker who steals the server-side account database can mount a scalable brute force attack by hashing each possible password and comparing against all the hashes in the account database. If your password policy isn't very strong, he can use a dictionary to make the attack even more efficient. If there are hundreds or thousands of accounts, his attack scales very nicely, because he is attacking all the passwords at once!
A more insidious problem is that if several sites use the same technique, the attacker doesn't even need to guess the original password, since he already has the password equivalent, SHA-1(password). He can immediately create Username tokens for all of these users that will be valid at any other site that uses this technique.
We can make progress on both fronts by modifying our approach to creating the password equivalent. Instead of using SHA-1(password), we use SHAd-1(password + userName + scopeUri) as the password equivalent.
userName and scopeUri are salt values. By including the user name as a salt, we limit the scalability of an offline attack by forcing the enemy to attack one password at a time since each account has a different salt.
scopeUri is simply a string that is unique to your password database. This ensures that the password equivalents you are using are different from those that another site uses. Neither of these salts is secret. An attacker gains no additional benefit by discovering their values.
You must publish the scopeUri to anyone who wants to use your service. You must also document a canonical form for user names, or silly things like case insensitivity will cause some users to fail authentication.
The last point is that we're not using SHA-1 anymore, but rather SHAd-1. This is a double hash:
SHAd-1(x) = SHA-1(SHA-1(x))
By hashing the data twice, we further reduce the scalability of an offline attack by ensuring that the internal state of the hash cannot be partially pre-computed based on a common salt such as the scopeUri. The ideas behind double hashing are described in detail in Practical Cryptography, by Niels Ferguson and Bruce Schneier.
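Putting the salted double-hash together looks something like this (a sketch in Python; the function names and encoding choices are mine, not from any spec):

```python
import hashlib

def shad1(data: bytes) -> bytes:
    """SHAd-1(x) = SHA-1(SHA-1(x)): hashing twice prevents an attacker
    from precomputing the hash's internal state for a common salt."""
    return hashlib.sha1(hashlib.sha1(data).digest()).digest()

def password_equivalent(password: str, user_name: str, scope_uri: str) -> bytes:
    """The userName salt forces one-account-at-a-time attacks; the
    scopeUri salt makes the equivalent useless at any other site."""
    return shad1((password + user_name + scope_uri).encode("utf-8"))

pe = password_equivalent("p@ssw0rd", "alice", "urn:example:accounts")
```

Remember to put user names into your documented canonical form (lowercasing them, for example) before hashing, or the salt won't match at verification time.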
To create the Username token, base64-encode the password equivalent calculated by the client and follow the OASIS procedure for forming the one-time hash:
Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )
When the server receives the Username token, he checks his replay cache, then looks up the password equivalent in the account database, and calculates the one-time hash using the nonce and timestamp sent by the client. If his calculated hash matches the digest sent by the client, he assumes the client is authentic, and adds the timestamp/nonce pair to his replay cache for later use.
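Both halves of that exchange can be sketched as follows (Python for illustration only; a production server must also enforce timestamp freshness and persist its replay cache, which this sketch omits):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def make_digest(password_equivalent: bytes) -> tuple[str, str, str]:
    """Client side: form the one-time hash from the OASIS profile,
    Password_Digest = Base64(SHA-1(nonce + created + password))."""
    nonce = os.urandom(16)
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = hashlib.sha1(nonce + created.encode() + password_equivalent).digest()
    return (base64.b64encode(nonce).decode(), created,
            base64.b64encode(digest).decode())

def verify_digest(nonce_b64: str, created: str, digest_b64: str,
                  stored_equivalent: bytes) -> bool:
    """Server side: recompute the digest from the stored password
    equivalent and the nonce/timestamp sent by the client, then
    compare against the digest the client sent."""
    nonce = base64.b64decode(nonce_b64)
    expected = hashlib.sha1(nonce + created.encode() + stored_equivalent).digest()
    return base64.b64encode(expected).decode() == digest_b64
```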
If this seems complicated, well, it is! And it's still subject to offline brute force and dictionary attacks unless you run over SSL! Much better is the simpler, recommended approach I described earlier. Go have another look and try to convince yourself to reconsider it.
Implementing a salted, iteratively hashed account database
If you're sending one-time hashed passwords, this technique won't be an option for you, since you need to look up the password equivalent in order to calculate the one-time hash. But if you're using the recommended approach, encrypting the Username token on the client and decrypting it on the server, the server will have possession of the password equivalent after the token is decrypted. You can then convert it into a verifier by salting it and iteratively hashing it. This can strengthen your account database significantly.
A verifier is simply a hash value that requires a non-trivial amount of work to calculate, when given the password equivalent. This is referred to as "salting and stretching" the password. We mix in a unique salt value and iteratively hash to reduce the scalability of offline attacks against a stolen account database.
The .NET Framework version 1.0 provides a class that implements this mechanism: PasswordDeriveBytes. If you're using version 2.0 of the Framework, prefer Rfc2898DeriveBytes instead, which complies with the PKCS#5 standard for password-based cryptography. In either case, you construct the object by passing in the password, salt, and iteration count, then call GetBytes() to get a byte array that represents the verifier. This is what you store in the account database, as opposed to a cleartext password.
When new clients establish accounts, you'll take their cleartext password (or password equivalent), generate a random salt value, and calculate the verifier. You'll store both the salt and the verifier in your account database. When existing clients send you a Username token, you'll pull out the password equivalent, look up the salt value and verifier from the account database, use the salt and presented password equivalent to calculate the caller's verifier, then compare it with the verifier from the account database.
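Rfc2898DeriveBytes implements PBKDF2, and the same derivation is available in most environments; here is that registration/logon flow sketched with Python's hashlib.pbkdf2_hmac (the iteration count and function names are my own choices):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # tune to your hardware; higher slows offline attacks

def create_account(password_equivalent: bytes) -> tuple[bytes, bytes]:
    """At registration: generate a random salt, then salt-and-stretch
    the password equivalent into a verifier. Store both; store neither
    the cleartext password nor the raw equivalent."""
    salt = os.urandom(16)
    verifier = hashlib.pbkdf2_hmac("sha1", password_equivalent, salt, ITERATIONS)
    return salt, verifier

def check_password(presented_equivalent: bytes, salt: bytes,
                   stored_verifier: bytes) -> bool:
    """At logon: recompute the verifier from the presented equivalent
    and the stored salt, then compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha1", presented_equivalent, salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_verifier)
```

An attacker who steals the database now pays the full iteration cost for every single guess against every single account, which is exactly the slowdown we're after.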
I've included some sample code that shows how to implement this scheme using PasswordDeriveBytes. It should help you get started.
One last thing: if you follow this technique (or even if you simply store the SHA-1 hash of the password) and the user forgets her password, you will not be able to e-mail it to her. This is a good thing! E-mailing passwords to users ranks just behind storing cleartext passwords on a server as one of the silliest things you can do. What you need is an alternate way of authenticating the user, so you can determine the authenticity of the gal on the phone asking you to reset her password. Asking the user a set of questions up front that can be used to authenticate her later on is a typical solution. Practical Cryptography contains a great description of this technique. Whatever strategy you choose, you need to plan for this ahead of time!
Username tokens are really easy to misuse. I strongly recommend encrypting them and any signatures created by them with a strong key established with a strong server authentication scheme, such as SSL. I also spent some time exploring some ideas for those who will not follow this advice and send Username tokens without encrypting them.
Passwords are a necessary evil, but by understanding the threats, and using the right techniques to mitigate those threats, you can still build secure systems around them. Just remember that no matter what you do, if your password policy isn't up to par, you're playing a very dangerous game.
References

Practical Cryptography, Niels Ferguson and Bruce Schneier, Wiley, 2003.
Building Secure Software: How to Avoid Security Problems the Right Way, John Viega and Gary McGraw, Addison-Wesley, 2001.