MSDN Magazine, August 2009 Issue
Security Briefs
Cryptographic Agility
Bryan Sullivan
Throughout history, people have used various forms of ciphers to conceal information from their adversaries. Julius Caesar used a three-place shift cipher (the letter A is converted to D, B is converted to E, and so on) to communicate battle plans. During World War II, the German navy used a significantly more advanced system—the Enigma machine—to encrypt messages sent to their U-boats. Today, we use even more sophisticated encryption mechanisms as part of the public key infrastructure that helps us perform secure transactions on the Internet.
But for as long as cryptographers have been making secret codes, cryptanalysts have been trying to break them and steal information, and sometimes the code breakers succeed. Cryptographic algorithms once considered secure are broken and rendered useless. Sometimes subtle flaws are found in the algorithms, and sometimes it is simply a matter of attackers having access to more computing power to perform brute-force attacks.
Recently, security researchers have demonstrated weaknesses in the MD5 hash algorithm as the result of collisions; that is, they have shown that two messages can have the same computed MD5 hash value. They have created a proof-of-concept attack against this weakness targeted at the public key infrastructures that protect e-commerce transactions on the Web. By purchasing a specially crafted Web site certificate from a certificate authority (CA) that uses MD5 to sign its certificates, the researchers were able to create a rogue CA certificate that could effectively be used to impersonate potentially any site on the Internet. They concluded that MD5 is not appropriate for signing digital certificates and that a stronger alternative, such as one of the SHA-2 algorithms, should be used. (If you're interested in learning more about this research, you can read the white paper.)
These findings are certainly cause for concern, but they are not a huge surprise. Theoretical MD5 weaknesses have been demonstrated for years, and the use of MD5 in Microsoft products has been banned by the Microsoft SDL cryptographic standards since 2005. Other once-popular algorithms, such as SHA-1 and RC2, have been similarly banned. Figure 1 shows a complete list of the cryptographic algorithms banned or approved by the SDL. The list of SDL-approved algorithms is current as of this writing, but this list is reviewed and updated annually as part of the SDL update process.
Figure 1 SDL-Approved Cryptographic Algorithms
Algorithm Type | Banned (algorithms to be replaced in existing code or used only for decryption) | Acceptable (algorithms acceptable for existing code, except sensitive data) | Recommended (algorithms for new code)
Symmetric Block | DES, DESX, RC2, SKIPJACK | 3DES (2 or 3 key) | AES (>=128 bit)
Symmetric Stream | SEAL, CYLINK_MEK, RC4 (<128 bit) | RC4 (>=128 bit) | None, block cipher is preferred
Asymmetric | RSA (<2048 bit), Diffie-Hellman (<2048 bit) | RSA (>=2048 bit), Diffie-Hellman (>=2048 bit) | RSA (>=2048 bit), Diffie-Hellman (>=2048 bit), ECC (>=256 bit)
Hash (includes HMAC usage) | SHA-0 (SHA), SHA-1, MD2, MD4, MD5 | SHA-2 | SHA-2 (includes SHA-256, SHA-384, SHA-512)
HMAC Key Lengths | <112 bit | >=112 bit | >=128 bit
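As a quick illustration of the recommended column, here is a minimal sketch (not from the article) that requests AES, an SDL-recommended block cipher, through the abstract SymmetricAlgorithm factory rather than a concrete class. The fallback to Aes.Create covers runtimes where the "AES" friendly name isn't registered:

```csharp
using System;
using System.Security.Cryptography;

static class CryptoDefaults
{
    // Request AES through the abstract factory. "AES" is a built-in
    // friendly name in .NET 3.5; if a given runtime doesn't register
    // it, fall back to the Aes factory method instead.
    public static SymmetricAlgorithm CreateSdlRecommendedCipher()
    {
        SymmetricAlgorithm cipher =
            SymmetricAlgorithm.Create("AES") ?? Aes.Create();
        cipher.KeySize = 256; // SDL recommends AES with >= 128-bit keys
        return cipher;
    }
}
```

Because the caller only ever sees the abstract SymmetricAlgorithm type, the concrete class behind it can later be swapped without touching this code.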
Even if you follow these standards in your own code, using only the most secure algorithms and the longest key lengths, there's no guarantee that the code you write today will remain secure. In fact, it will probably not remain secure if history is any guide.

Planning for Future Exploits
You can address this unpleasant scenario reactively by going through your old applications' code bases, picking out instantiations of vulnerable algorithms, replacing them with new algorithms, rebuilding the applications, running them through regression tests, and then issuing patches or service packs to your users. This is not only a lot of work for you, but it still leaves your users at risk until you can get the fixes shipped.
A better alternative is to plan for this scenario from the beginning. Rather than hard-coding specific cryptographic algorithms into your code, use one of the crypto-agility features built into the Microsoft .NET Framework. Let's take a look at a few C# code snippets, starting with the least agile example:
private static byte[] computeHash(byte[] buffer)
{
   using (MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider())
   {
      return md5.ComputeHash(buffer);
   }
}
This code is completely nonagile. It is tied to a specific algorithm (MD5) as well as a specific implementation of that algorithm, the MD5CryptoServiceProvider class. Modifying this application to use a secure hashing algorithm would require changing code and issuing a patch. Here's a slightly better example:
private static byte[] computeHash(byte[] buffer)
{
   string md5Impl = ConfigurationManager.AppSettings["md5Impl"];
   if (md5Impl == null)
      md5Impl = String.Empty;

   using (MD5 md5 = MD5.Create(md5Impl))
   {
      return md5.ComputeHash(buffer);
   }
}
This function uses the System.Configuration.ConfigurationManager class to retrieve a custom app setting (the "md5Impl" setting) from the application's configuration file. In this case, the setting is used to store the strong name of the algorithm implementation class you want to use. The code passes the retrieved value of this setting to the static function MD5.Create to create an instance of the desired class. (System.Security.Cryptography.MD5 is an abstract base class from which all implementations of the MD5 algorithm must derive.) For example, if the application setting for md5Impl was set to the string "System.Security.Cryptography.MD5Cng, System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089", MD5.Create would create an instance of the MD5Cng class.
This approach solves only half of our crypto-agility problem, so it really is no solution at all. We can now specify an implementation of the MD5 algorithm without having to change any source code, which might prove useful if a flaw is discovered in a specific implementation, like MD5Cng, but we're still tied to the use of MD5 in general. To solve this problem, we keep abstracting upward:
private static byte[] computeHash(byte[] buffer)
{
   using (HashAlgorithm hash = HashAlgorithm.Create("MD5"))
   {
      return hash.ComputeHash(buffer);
   }
}
At first glance, this code snippet does not look substantially different from the first example. It looks like we've once again hard-coded the MD5 algorithm into the application via the call to HashAlgorithm.Create("MD5"). Surprisingly though, this code is substantially more cryptographically agile than both of the other examples. While the default behavior of the method call HashAlgorithm.Create("MD5")—as of .NET 3.5—is to create an instance of the MD5CryptoServiceProvider class, the runtime behavior can be customized by making a change to the machine.config file.
Let's change the behavior of this code to create an instance of the SHA-512 algorithm instead of MD5. To do this, we need to add two elements to the machine.config file: a <cryptoClass> element to map a friendly algorithm name to the algorithm implementation class we want, and a <nameEntry> element to map the old, deprecated algorithm's friendly name to the new friendly name.
<configuration>
  <mscorlib>
    <cryptographySettings>
      <cryptoNameMapping>
        <cryptoClasses>
          <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, System.Core, Version=3.5.0.0, 
          	Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
        </cryptoClasses>
        <nameEntry name="MD5" class="MyPreferredHash"/>
      </cryptoNameMapping>
    </cryptographySettings>
  </mscorlib>
</configuration>
Now, when our code makes its call to HashAlgorithm.Create("MD5"), the CLR looks in the machine.config file and sees that the string "MD5" should map to the friendly algorithm name "MyPreferredHash". It then sees that "MyPreferredHash" maps to the class SHA512CryptoServiceProvider (as defined in the assembly System.Core, with the specified version, culture, and public key token) and creates an instance of that class.
It's important to note that the algorithm remapping takes place not at compile time but at run time: it's the user's machine.config that controls the remapping, not the developer's. As a result, this technique solves our dilemma of being tied to a particular algorithm that might be broken at some time in the future. By avoiding hard-coding the cryptographic algorithm class into the application—coding only the abstract type of cryptographic algorithm, HashAlgorithm, instead—we create an application in which the end user (more specifically, someone with administrative rights to edit the machine.config file on the machine where the application is installed) can determine exactly which algorithm and implementation the application will use. An administrator might choose to replace an algorithm that was recently broken with one still considered secure (for example, replace MD5 with SHA-256) or to proactively replace a secure algorithm with an alternative with a longer bit length (replace SHA-256 with SHA-512).

Potential Problems
Modifying the machine.config file to remap the default algorithm-type strings (like "MD5" and "SHA1") might solve crypto-agility problems, but it can create compatibility problems at the same time. Making changes to machine.config affects every .NET application on the machine. Other applications installed on the machine might rely on MD5 specifically, and changing the algorithms used by these applications might break them in unexpected ways that are difficult to diagnose. As an alternative to forcing blanket changes to the entire machine, it's better to use custom, application-specific friendly names in your code and map those name entries to preferred classes in the machine.config. For example, we can change "MD5" to "MyApplicationHash" in our example:
private static byte[] computeHash(byte[] buffer)
{
   using (HashAlgorithm hash = HashAlgorithm.Create("MyApplicationHash"))
   {
      return hash.ComputeHash(buffer);
   }
}
We then add an entry to the machine.config file to map "MyApplicationHash" to the "MyPreferredHash" class:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, 
     System.Core, Version=3.5.0.0, Culture=neutral, 
     PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="MyApplicationHash" class="MyPreferredHash"/>
You can also map multiple friendly names to the same class; for example, you could have one friendly name for each of your applications, and in this way change the behavior of specific applications without affecting every other application on the machine:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, 
     System.Core, Version=3.5.0.0, Culture=neutral, 
     PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="MyApplicationHash" class="MyPreferredHash"/>
<nameEntry name="MyApplication2Hash" class="MyPreferredHash"/>
<nameEntry name="MyApplication3Hash" class="MyPreferredHash"/>

However, we're still not out of the woods with regard to compatibility problems in our own applications. You need to plan ahead regarding storage size, for both local variables (transient storage) and database and XML schemas (persistent storage). For example, MD5 hashes are always 128 bits in length. If you budget exactly 128 bits in your code or schema to store hash output, you will not be able to upgrade to SHA-256 (256 bit-length output) or SHA-512 (512 bit-length output).
This raises the question of how much storage is enough. Is 512 bits enough, or should you use 1,024, 2,048, or more? I can't provide a hard rule here because every application has different requirements, but as a rule of thumb I recommend that you budget twice as much space for hashes as you currently use. For symmetric- and asymmetric-encrypted data, you might reserve an extra 10 percent of space at most. It's unlikely that new algorithms with output sizes significantly larger than those of existing algorithms will be widely accepted.
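One way to put that rule of thumb into code is to query the configured algorithm for its output size at run time instead of hard-coding a width. This is a sketch, not the article's code; "MyApplicationHash" is the custom friendly name used elsewhere in this article and requires a machine.config entry, hence the fallback:

```csharp
using System;
using System.Security.Cryptography;

static class HashStorage
{
    // Ask the configured algorithm how big its output is rather than
    // assuming a fixed width, then budget double for future upgrades.
    public static int RecommendedStorageBytes()
    {
        // Fall back to SHA-256 when no "MyApplicationHash" mapping exists.
        using (HashAlgorithm hash = HashAlgorithm.Create("MyApplicationHash")
                                     ?? SHA256.Create())
        {
            int currentBytes = hash.HashSize / 8; // HashSize is in bits
            return currentBytes * 2;              // the rule of thumb above
        }
    }
}
```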
However, applications that store hash values or encrypted data in a persistent state (for example, in a database or file) have bigger problems than reserving enough space. If you persist data using one algorithm and then try to operate on that data later using a different algorithm, you will not get the results you expect. For example, it's a good idea to store hashes of passwords rather than the full plaintext versions. When the user tries to log on, the code can compare the hash of the password supplied by the user to the stored hash in the database. If the hashes match, the user is authentic. However, if a hash is stored in one format (say, MD5) and an application is upgraded to use another algorithm (say, SHA-256), users will never be able to log on because the SHA-256 hash value of the passwords will always be different from the MD5 hash value of those same passwords.
You can get around this issue in some cases by storing the original algorithm as metadata along with the actual data. Then, when operating on stored data, use the agility methods (or reflection) to instantiate the algorithm originally used instead of the current algorithm:
private static bool checkPassword(string password, byte[] storedHash,
   string storedHashAlgorithm)
{
   using (HashAlgorithm hash = HashAlgorithm.Create(storedHashAlgorithm))
   {
      byte[] newHash = 
         hash.ComputeHash(System.Text.Encoding.Default.GetBytes(password));
      if (newHash.Length != storedHash.Length)
         return false;
      for (int i = 0; i < newHash.Length; i++)
         if (newHash[i] != storedHash[i])
            return false;
      return true;
   }
}
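The article doesn't show the corresponding write path. Here is a hypothetical companion sketch (the StoredCredential type and the choice of "SHA256" are invented for illustration) that records the friendly algorithm name alongside the hash so that checkPassword can recreate the right algorithm later; a production version would also store a per-user salt, omitted here for brevity:

```csharp
using System;
using System.Security.Cryptography;

class StoredCredential
{
    public byte[] Hash;          // hash of the password
    public string HashAlgorithm; // friendly name of the algorithm used
}

static class PasswordStore
{
    // Hash with the currently preferred algorithm and remember which
    // one was used, so future checks can replay it after an upgrade.
    public static StoredCredential CreateCredential(string password)
    {
        const string currentAlgorithm = "SHA256"; // current policy choice
        using (HashAlgorithm hash = HashAlgorithm.Create(currentAlgorithm))
        {
            return new StoredCredential
            {
                Hash = hash.ComputeHash(
                    System.Text.Encoding.Default.GetBytes(password)),
                HashAlgorithm = currentAlgorithm
            };
        }
    }
}
```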
Unfortunately, if you ever need to compare two stored hashes, they have to have been created using the same algorithm. There is simply no way to compare an MD5 hash to a SHA-256 hash and determine if they were both created from the same original data. There is no good crypto-agility solution for this problem, and the best advice I can offer is that you should choose the most secure algorithm currently available and then develop an upgrade plan in case that algorithm is broken later. In general, crypto agility tends to work much better for transient data than for persistent data.

Alternative Usage and Syntax
Assuming that your application design allows the use of crypto agility, let's continue to look at some alternative uses and syntaxes for this technique. We've focused almost entirely on cryptographic hashing algorithms to this point in the article, but crypto agility also works for other cryptographic algorithm types. Just call the static Create method of the appropriate abstract base class: SymmetricAlgorithm for symmetric (secret-key) cryptography algorithms such as AES; AsymmetricAlgorithm for asymmetric (public key) cryptography algorithms such as RSA; KeyedHashAlgorithm for keyed hashes; and HMAC for hash-based message authentication codes.
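For instance, the same pattern applied to symmetric encryption might look like the following sketch. The friendly name "MyApplicationCipher" is hypothetical and would need its own nameEntry in machine.config, so the code falls back to AES when no mapping exists:

```csharp
using System;
using System.Security.Cryptography;

static class AgileEncryption
{
    // Encrypt with whatever algorithm the configuration maps
    // "MyApplicationCipher" to; when no mapping exists, default to AES.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
    {
        using (SymmetricAlgorithm cipher =
               SymmetricAlgorithm.Create("MyApplicationCipher") ?? Aes.Create())
        {
            cipher.Key = key; // key length selects, for example, AES-256
            cipher.IV = iv;
            using (ICryptoTransform encryptor = cipher.CreateEncryptor())
                return encryptor.TransformFinalBlock(
                    plaintext, 0, plaintext.Length);
        }
    }
}
```

As with the hashing examples, only the abstract SymmetricAlgorithm type appears in the code, so an administrator can remap the cipher without a recompile.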
You can also use crypto agility to replace one of the standard .NET cryptographic algorithm classes with a custom algorithm class, such as one of the algorithms developed by the CLR security team and uploaded to CodePlex. However, writing your own custom crypto libraries is highly discouraged. Your homemade algorithm consisting of a ROT13 followed by a bitwise left shift and an XOR against your cat's name might seem secure, but it will pose little challenge to an expert code breaker. Unless you are an expert in cryptography, leave algorithm design to the professionals.
Also resist the temptation to develop your own algorithms—or to revive long-dead, obscure ones, like the Vigenère cipher—even in situations where you don't need cryptographically strong protection. The issue isn't so much what you do with your cipher, but what developers who come after you will do with it. A new developer who finds your custom algorithm class in the code base years later might decide that it's just what he needs for the new product activation key generation logic.
So far we've seen one of the syntaxes for implementing cryptographically agile code, AlgorithmType.Create(algorithmName), but two other approaches are built into the .NET Framework. The first is to use the System.Security.Cryptography.CryptoConfig class:
private static byte[] computeHash(byte[] buffer)
{
   using (HashAlgorithm hash = (HashAlgorithm)CryptoConfig.CreateFromName("MyApplicationHash"))
   {
      return hash.ComputeHash(buffer);
   }
}
This code performs the same operations as our previous example using HashAlgorithm.Create("MyApplicationHash"): the CLR looks in the machine.config file for a remapping of the string "MyApplicationHash" and uses the remapped algorithm class if it finds one. Notice that we have to cast the result of CryptoConfig.CreateFromName because it has a return type of System.Object and can be used to create SymmetricAlgorithms, AsymmetricAlgorithms, or any other kind of object.
The second alternative syntax is to call the static algorithm Create method in our original example but with no parameters, like this:
private static byte[] computeHash(byte[] buffer)
{
   using (HashAlgorithm hash = HashAlgorithm.Create())
   {
      return hash.ComputeHash(buffer);
   }
}
In this code, we simply ask the framework to provide an instance of whatever the default hash algorithm implementation is. You can find the list of defaults for each of the System.Security.Cryptography abstract base classes (as of .NET 3.5) in Figure 2.
Figure 2 Default Algorithms and Implementations in the .NET Framework 3.5
Abstract Base Class | Default Algorithm | Default Implementation
HashAlgorithm | SHA-1 | SHA1CryptoServiceProvider
SymmetricAlgorithm | AES (Rijndael) | RijndaelManaged
AsymmetricAlgorithm | RSA | RSACryptoServiceProvider
KeyedHashAlgorithm | SHA-1 | HMACSHA1
HMAC | SHA-1 | HMACSHA1
For HashAlgorithm, you can see that the default algorithm is SHA-1 and the default implementation class is SHA1CryptoServiceProvider. However, we know that SHA-1 is banned by the SDL cryptographic standards. For the moment, let's ignore the fact that potential compatibility problems make it generally unwise to remap built-in algorithm names like "SHA1", and alter our machine.config to map "SHA1" to SHA512CryptoServiceProvider:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, System.Core, Version=3.5.0.0, Culture=neutral, 
     PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="SHA1" class="MyPreferredHash"/>
Now let's insert a debug line in the computeHash function to confirm that the algorithm was remapped correctly and then run the application:
private static byte[] computeHash(byte[] buffer)
{
   using (HashAlgorithm hash = HashAlgorithm.Create())
   {
      Debug.WriteLine(hash.GetType());
      return hash.ComputeHash(buffer);
   }
}
The debug output from this method is:
System.Security.Cryptography.SHA1CryptoServiceProvider
What happened? Didn't we remap SHA1 to SHA-512? Actually, no, we didn't. We remapped only the string "SHA1" to the class SHA512CryptoServiceProvider, and we did not pass the string "SHA1" as a parameter to the call to HashAlgorithm.Create.
Even though Create appears to have no string parameters to remap, it is still possible to change the type of object that is created. You can do this because HashAlgorithm.Create() is just shortcut syntax for HashAlgorithm.Create("System.Security.Cryptography.HashAlgorithm"). Now let's add another line to the machine.config file to remap "System.Security.Cryptography.HashAlgorithm" to SHA512CryptoServiceProvider and then run the app again:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, System.Core, Version=3.5.0.0, 
     Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="SHA1" class="MyPreferredHash"/>
<nameEntry name="System.Security.Cryptography.HashAlgorithm" class="MyPreferredHash"/>
The debug output from computeHash is now exactly what we expected:
System.Security.Cryptography.SHA512CryptoServiceProvider
However, remember that remapping classes in this way can create unexpected and difficult-to-debug compatibility issues. It's preferable to use application-specific friendly names that can be remapped with less chance of causing problems.

Another Benefit of Crypto Agility
In addition to letting you replace broken algorithms on the fly without having to recompile, crypto agility can be used to improve performance. If you've ever looked at the System.Security.Cryptography namespace, you might have noticed that often several different implementation classes exist for a given algorithm. For example, there are three different implementations of SHA-512: SHA512Cng, SHA512CryptoServiceProvider, and SHA512Managed.
Of these classes, SHA512Cng usually offers the best performance. A quick test on my laptop (running Windows 7 release candidate) shows that the –Cng classes in general are about 10 percent faster than the -CryptoServiceProvider and -Managed classes. My colleagues in the Core Architecture group inform me that in some circumstances the –Cng classes can actually run 10 times faster than the others!
Clearly, using the –Cng classes is preferable, and we could set up our machine.config file to remap algorithm implementations to use those classes, but the -Cng classes are not available on every operating system. Only Windows Vista, Windows Server 2008, and Windows 7 (and later versions, presumably) support –Cng. Trying to instantiate a –Cng class on any other operating system will throw an exception.
Similarly, the –Managed family of crypto classes (AesManaged, RijndaelManaged, SHA256Managed, and so on) are not always available, but for a completely different reason. Federal Information Processing Standard (FIPS) 140 specifies standards for cryptographic algorithms and implementations. As of this writing, both the –Cng and –CryptoServiceProvider implementation classes are FIPS-certified, but the –Managed classes are not. Furthermore, you can configure a Group Policy setting that allows only FIPS-compliant algorithms to be used. Some U.S. and Canadian government agencies mandate this policy setting. If you'd like to check your machine, open the Local Group Policy Editor (gpedit.msc), navigate to the Computer Configuration/Windows Settings/Security Settings/Local Policies/Security Options node, and check the value of the setting "System Cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing". If this policy is set to Enabled, attempting to instantiate a –Managed class on that machine will throw an exception.
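.NET 3.5 offers no public API for querying this policy directly, but one workaround (a sketch, not an official technique) is to probe for it by attempting to instantiate a –Managed class and catching the failure:

```csharp
using System;
using System.Security.Cryptography;

static class FipsProbe
{
    // Returns true when the FIPS-only group policy blocks -Managed
    // classes. Under that policy the SHA256Managed constructor throws
    // InvalidOperationException; otherwise construction succeeds.
    public static bool FipsPolicyBlocksManagedClasses()
    {
        try
        {
            using (SHA256Managed probe = new SHA256Managed()) { }
            return false;
        }
        catch (InvalidOperationException)
        {
            return true;
        }
    }
}
```

An installer could run a probe like this once and write the appropriate cryptoClass mappings into machine.config for that machine.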
This leaves the –CryptoServiceProvider family of classes as the lowest common denominator guaranteed to work on all platforms, but these classes also generally have the worst performance. You can overcome this problem by implementing one of the three crypto-agility syntaxes mentioned earlier in this article and customizing the machine.config file remapping for deployed machines based on their operating system and settings. For machines running Windows Vista or later, we can remap the machine.config to prefer the –Cng implementation classes:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512Cng, System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="MyApplicationHash" class="MyPreferredHash"/>
For machines running operating systems earlier than Windows Vista with FIPS compliance disabled, we can remap machine.config to prefer the –Managed classes:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512Managed"/>
</cryptoClasses>
<nameEntry name="MyApplicationHash" class="MyPreferredHash"/>
For all other machines, we remap to the –CryptoServiceProvider classes:
<cryptoClasses>
   <cryptoClass MyPreferredHash="SHA512CryptoServiceProvider, System.Core, Version=3.5.0.0, 
     Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
</cryptoClasses>
<nameEntry name="MyApplicationHash" class="MyPreferredHash"/>
Any call to HashAlgorithm.Create("MyApplicationHash") now creates the highest-performing implementation class available for that machine. Furthermore, since the algorithms are identical, you don't need to worry about compatibility or interoperability issues. A hash created for a given input value on one machine will be the same as a hash created for that same input value on another machine, even if the implementation classes are different. This holds true for the other algorithm types as well: you can encrypt an input on one machine by using AesManaged and decrypt it successfully on a different machine by using AesCryptoServiceProvider.
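This implementation-independence claim is easy to verify with a short sketch that hashes the same input with two different SHA-512 implementation classes and compares the results:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class InteropCheck
{
    // Two implementations of the same algorithm must agree byte for
    // byte; the algorithm, not the implementation class, defines the
    // output.
    public static bool ImplementationsAgree(byte[] input)
    {
        using (SHA512 managed = new SHA512Managed())
        using (SHA512 csp = new SHA512CryptoServiceProvider())
        {
            return managed.ComputeHash(input)
                          .SequenceEqual(csp.ComputeHash(input));
        }
    }
}
```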

Wrapping Up
Given the time and expense of recoding your application in response to a broken cryptographic algorithm, not to mention the danger to your users until you can get a new version deployed, it is wise to plan for this occurrence and write your application in a cryptographically agile fashion. The fact that you can also obtain a performance benefit from coding this way is icing on the cake.
Never hardcode specific algorithms or implementations of those algorithms into your application. Always declare cryptographic algorithms as one of the following abstract algorithm type classes: HashAlgorithm, SymmetricAlgorithm, AsymmetricAlgorithm, KeyedHashAlgorithm, or HMAC.
I believe that an FxCop rule that would verify cryptographic agility would be extremely useful. If someone writes such a rule and posts it to CodePlex or another public code repository, I will be more than happy to give them full credit in this space and on the SDL blog.
Finally, I would like to acknowledge Shawn Hernan from the SQL Server security team and Shawn Farkas from the CLR security team for their expert feedback and help in producing this article.
Send your questions and comments to briefs@microsoft.com.

Bryan Sullivan is a security program manager for the Microsoft Security Development Lifecycle team, specializing in Web application security issues. He is the author of Ajax Security (Addison-Wesley, 2007).
