Security Briefs - Web Application Configuration Security Revisited
By Bryan Sullivan | November 2010
A few years ago—prior to my time at Microsoft and the Security Development Lifecycle (SDL) team—I wrote an article on the dangers of insecure web.config settings and named the top 10 worst offenders. You can still find this article today—just search for “Top 10 Application Security Vulnerabilities in Web.Config Files” in your favorite search engine. The configuration vulnerabilities I talked about back then are still relevant and serious today, although they probably wouldn’t come as huge surprises to regular readers of MSDN Magazine. It’s still important to enable custom errors; it’s still important to disable tracing and debugging before pushing your application to production; and it’s still important to require SSL for authentication cookies.
In this month’s column, I’d like to pick up where that article left off and discuss some of the more obscure but equally serious security misconfigurations. I’m also going to take a look at a new free tool from the Microsoft Information Security Tools team called the Web Application Configuration Analyzer that can help find these problems. Remember, even the most securely coded ASP.NET application can be hacked if it isn’t configured correctly.
One of the more common mistakes I see developers make is that they give users a list of choices and then assume the users will, in fact, choose one of those values. It seems logical enough: If you add a ListBox control to a page and then pre-populate it with the list of all states in the United States, you’d expect to get back “Washington” or “Georgia” or “Texas”; you wouldn’t expect “Foo” or “!@#$%” or “<script>alert(document.cookie);</script>”. There may not be a way to specify values like this by using the application in the traditional way, with a browser, but there are plenty of ways to access Web applications without using a browser at all! With a Web proxy tool such as Eric Lawrence’s Fiddler (which remains one of my favorite tools for finding security vulnerabilities in Web applications and can be downloaded from fiddler2.com), you can send any value you want for any form field. If your application isn’t prepared for this possibility, it can fail in potentially dangerous ways.
The EnableEventValidation configuration setting is a defense-in-depth mechanism that helps protect against attacks of this nature. If a malicious user tries to send an unexpected value for a control that accepts a finite list of values (such as a ListBox, as opposed to a TextBox, which can legitimately accept any value), the application will detect the tampering and throw an exception.
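Event validation is enabled by default in ASP.NET 2.0 and later, but it's worth verifying the setting explicitly rather than relying on the default. A minimal web.config sketch looks like this:

```xml
<configuration>
  <system.web>
    <!-- Reject postback values that weren't rendered as part of the control -->
    <pages enableEventValidation="true" />
  </system.web>
</configuration>
```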
The membership provider framework supplied as part of ASP.NET (starting in ASP.NET 2.0) is a great feature that keeps developers from having to reinvent the membership-functionality wheel time and time again. In general, the built-in providers are quite good from a security perspective when left in their default settings. However, if the membership configuration settings are changed, they can become significantly less secure.
One good example of this is the PasswordFormat setting, which determines how user passwords are stored. You have three choices: Clear, which stores passwords in plaintext; Encrypted, which encrypts the passwords before storing them; and Hashed, which stores hashes of the passwords instead of the passwords themselves. Of these choices, Clear is clearly the worst. It’s never appropriate to store passwords in plaintext. A much better choice is Encrypted, and the best choice is Hashed, because the best way to store a secret is not to store it at all. However, because there’s no way to retrieve the original password from a hash, if a user forgets his or her password, there’s no way for you to recover it.
<configuration>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             passwordFormat="Clear"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
<configuration>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             passwordFormat="Encrypted"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
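For completeness, here's a sketch of the most secure option, with the remaining provider attributes elided just as in the fragments above:

```xml
<configuration>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             passwordFormat="Hashed"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
```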
MinRequiredPasswordLength and MinRequiredNonalphanumericCharacters
There are two membership settings whose default values should be changed: the MinRequiredPasswordLength and MinRequiredNonalphanumericCharacters properties. For AspNetSqlMembershipProvider objects, these settings default to a minimum required password length of six characters, with no non-alphanumeric characters required. For better security, these settings should be set much higher. You should require at least a 10-character-long password, with two or more non-alphanumeric characters. A 14-character minimum with four or more non-alphanumeric characters would be better still.
It’s true that password length and complexity are double-edged swords: When you require your users to set longer and more complex passwords, there’s less of a chance those passwords will fall to brute-force attacks, but there’s also a correspondingly greater chance that your users won’t be able to remember their passwords and will be forced to write them down. However, while this sounds like a horrible potential security hole, many security experts believe the benefits outweigh the risks. Noted security guru Bruce Schneier, for one, suggests that users create long, complex passwords and store them in their purse or wallet, as this is a place where people are used to securing small pieces of paper.
<configuration>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             minRequiredPasswordLength="6"
             minRequiredNonalphanumericCharacters="0"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
<configuration>
  <system.web>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider"
             minRequiredPasswordLength="14"
             minRequiredNonalphanumericCharacters="4"
             ... />
      </providers>
    </membership>
  </system.web>
</configuration>
The Microsoft Online Safety site (microsoft.com/protect/fraud/passwords/create.aspx) also suggests that users should write their passwords down, and it has additional information on creating and securing strong passwords.
Cross-site scripting (XSS) continues to be the most common Web vulnerability. A report published by Cenzic Inc. in July found that in the first half of the year, XSS vulnerabilities accounted for 28 percent of all Web attacks. Given the potentially severe consequences of an XSS vulnerability—I’ve often called XSS “the buffer overflow of the Web” in the past—it’s only logical that developers should do whatever they can to help defend their applications against this attack. It’s especially nice when you get a defense that basically costs you nothing, and that’s what ValidateRequest is.
ValidateRequest works by testing user input for the presence of common attack patterns, such as whether the input string contains angle brackets (<). If it does, the application throws an exception and stops processing the request. While this isn’t a complete solution in and of itself—you should also always apply output encoding and input validation/sanitization logic, such as is built into the Microsoft Web Protection Library—ValidateRequest does block many types of popular XSS attacks. It’s best to leave ValidateRequest enabled whenever possible.
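Request validation is on by default; if it has been disabled site-wide at some point, re-enabling it is a one-line change in web.config (a minimal sketch):

```xml
<configuration>
  <system.web>
    <!-- Reject requests containing common attack patterns, such as "<" -->
    <pages validateRequest="true" />
  </system.web>
</configuration>
```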
It’s rarely a good idea to allow users to make arbitrarily large HTTP requests to your application. Doing so opens you up to denial-of-service (DoS) attacks, in which a single attacker could consume all of your bandwidth, processor cycles or disk space, making your application unavailable to the legitimate users you’re trying to serve.
To help prevent this, you can set the MaxRequestLength property to an appropriately small value. The default value is 4096KB (4MB). Because different applications have different requirements as to what their usual and exceptional request sizes are, it’s difficult to offer a rule of thumb for what MaxRequestLength should be. So, instead of giving examples of “bad” and “good” settings, I’ll simply suggest you keep in mind that the higher you set this value, the more you put yourself at risk of a DoS attack.
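As an illustration only (the 1,024KB figure is an arbitrary example, not a recommendation), a tighter limit can be configured like this. Note that the attribute value is specified in kilobytes:

```xml
<configuration>
  <system.web>
    <!-- maxRequestLength is specified in KB; 1024 = 1MB (illustrative value) -->
    <httpRuntime maxRequestLength="1024" />
  </system.web>
</configuration>
```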
If you’re deploying your application in a server farm environment, it’s also important to remember to manually specify a key for the MAC rather than letting the application auto-generate random keys. (If you don’t manually specify keys, each machine in the farm will auto-generate a different key, and the view state MAC created by any of the machines will be considered invalid and will be blocked by any of the other machines.)
There are a few additional guidelines you should follow when manually creating keys to ensure maximum security for your view state. First, be sure to specify one of the SDL-approved cryptographic algorithms. For applications using the Microsoft .NET Framework 3.5 or earlier, this means using either SHA1 (which is the default algorithm) or AES. For applications using the .NET Framework 4, you can also use HMACSHA256, HMACSHA384 or HMACSHA512. Avoid weak algorithms such as MD5.
It’s just as important to choose a strong key as it is to choose a strong algorithm. Use a cryptographically strong random-number generator to generate a 64-byte key (128-byte if you’re using HMACSHA384 or HMACSHA512 as your key algorithm). Reference sample code to generate appropriate keys is provided in the July 2010 Security Briefs column I mentioned earlier.
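The referenced sample code isn't reproduced here, but as an illustrative sketch (in Python rather than the column's .NET code), the following shows the general technique: use a cryptographically strong random-number generator to produce hex-encoded keys of the appropriate sizes. The function name and its defaults are my own, chosen to match the key sizes discussed above:

```python
import secrets


def make_machine_key(validation_bytes: int = 64,
                     decryption_bytes: int = 24) -> dict:
    """Generate hex-encoded keys from a cryptographically strong RNG.

    64 bytes suits the validation algorithms discussed in the article
    (use 128 bytes for HMACSHA384 or HMACSHA512); 24 bytes matches the
    AES decryption key size the article recommends.
    """
    return {
        "validationKey": secrets.token_hex(validation_bytes),  # 128 hex chars
        "decryptionKey": secrets.token_hex(decryption_bytes),  # 48 hex chars
    }


keys = make_machine_key()
print(len(keys["validationKey"]), len(keys["decryptionKey"]))  # 128 48
```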
Just as you should apply a MAC to your application’s view state to keep potential attackers from tampering with it, you should also encrypt the view state to keep them from reading it. Unless you’re 100 percent sure there’s no sensitive information in any of your view state, it’s safest to set the ViewStateEncryptionMode property to Always so the view state is always encrypted.
Again, just as with EnableViewStateMac, you have your choice of several cryptographic algorithms the application will use to encrypt the view state. However, it’s best to stick with AES, which is the only available algorithm currently approved by the SDL Cryptographic Standards.
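A sketch of that setting in web.config:

```xml
<configuration>
  <system.web>
    <!-- "Always" encrypts view state on every page, rather than only
         when a control requests encryption ("Auto") -->
    <pages viewStateEncryptionMode="Always" />
  </system.web>
</configuration>
```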
Finally, remember that if you’re deploying your application in a server farm, you’ll need to manually specify a key. Make sure to set the key value to a 24-byte cryptographically random value.
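Pulling the view state guidance together, a machineKey sketch might look like the following. The key values are placeholders, not usable keys; generate your own with a cryptographically strong random-number generator, and note that the HMACSHA256 validation algorithm requires the .NET Framework 4:

```xml
<configuration>
  <system.web>
    <!-- Placeholder keys: a 64-byte validation key is 128 hex characters;
         a 24-byte AES decryption key is 48 hex characters -->
    <machineKey validation="HMACSHA256"
                validationKey="[128 hex characters]"
                decryption="AES"
                decryptionKey="[48 hex characters]" />
  </system.web>
</configuration>
```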
When developers are frustrated enough by a difficult bug, they’ll often implement any change they read about that fixes the problem without really understanding what they’re doing to their application. The UseUnsafeHeaderParsing setting is a great example of this phenomenon. While the word “unsafe” in the property name alone should be enough to throw up a red flag for most people, a quick Internet search reveals literally thousands of results suggesting developers enable this property. If you do enable UseUnsafeHeaderParsing, your application will ignore many of the HTTP RFC specifications and attempt to parse malformed requests. While doing so can allow your application to work with HTTP clients that disobey HTTP standards (which is why so many people suggest it as a problem fix), it can also open your application to malformed header attacks. Play it safe and leave this setting disabled.
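For reference, the setting lives in the system.net section of the configuration file; the value shown here is the safe default, which is exactly what you should keep (a sketch):

```xml
<configuration>
  <system.net>
    <settings>
      <!-- false is the default; shown only to illustrate the setting that
           circulating advice urges developers to enable -->
      <httpWebRequest useUnsafeHeaderParsing="false" />
    </settings>
  </system.net>
</configuration>
```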
Web Application Configuration Analyzer (WACA)
Now that we’ve taken a look at some dangerous configuration settings, let’s take a look at a tool that can help automate finding these settings in your code. After all, while manual code review can be useful, automated analysis can be more thorough and more consistent. You’ll also save yourself from the drudgery of hand-reviewing XML files and leave yourself more time to solve more-interesting problems!
The Microsoft Information Security Tools team has released some excellent security tools, including two (the AntiXSS/Web Protection Library and CAT.NET) that we’ve made mandatory for all internal Microsoft .NET Framework products and services as part of the Microsoft SDL. The team’s latest release, WACA, is designed to detect potentially dangerous misconfigurations, such as the ones I’ve discussed in this article and in my earlier article on the top 10 most common web.config vulnerabilities. Some examples of WACA checks include:
- Is tracing enabled?
- Is MaxRequestLength too large?
- Are HttpOnly cookies disabled?
- Is SSL required for forms authentication login?
- Is the EnableViewStateMac attribute set to false?
In addition, WACA can also check for misconfigurations in IIS itself, as well as SQL database misconfigurations and even system-level issues. Some examples include:
- Is the Windows Firewall service disabled?
- Is the local admin named “Administrator”?
- Is the IIS log file on the system drive?
- Is execute enabled on the application virtual directory?
- Are sample databases present on the SQL server?
- Is xp_cmdshell enabled on the SQL server?
While developers and testers will probably use WACA mostly for checking their applications’ configuration settings, systems administrators and database administrators will find value in using WACA to check IIS, SQL and system settings (see Figure 1). In all, there are more than 140 checks in WACA derived from SDL requirements and patterns & practices coding guidelines.
Figure 1 Web Application Configuration Analyzer Rules
One more really handy feature of WACA is that you can automatically create work items or bugs in Team Foundation Server (TFS) team projects from WACA scan results. This is especially useful when you use it with a team project created from either the SDL process template or the MSF-Agile+SDL process template. From the WACA TFS setup page, map the template field “Origin” to the value “Web Application Configuration Analyzer.” Now when you view your bug reports and trend charts, you’ll be able to filter and drill down into the WACA results to see how effective it’s been at detecting potential vulnerabilities (see Figure 2).
Figure 2 WACA Team Foundation Server Integration
You can read more about WACA on the Microsoft IT InfoSec group’s page (msdn.microsoft.com/security/dd547422), and you can watch a video demonstration of the tool presented by Anil Revuru, program manager for the WACA project (msdn.microsoft.com/security/ee909463).
Always Check Your Settings
It’s frustrating to think that you could develop your application following every secure development guideline and best practice and still end up hacked because of a simple mistake in a web.config configuration file. It’s even more frustrating when you realize that web.config files are designed to be changed at any time and that the configuration mistake could come years after you’ve finished coding the application and moved it to production. It’s important to always check your configuration settings—not just by manual inspection, but with automated tools, and not just during the development lifecycle, but also in production.
Follow-up on Regular Expression DoS Attacks
On a completely different topic: In the May 2010 Security Briefs column (msdn.microsoft.com/magazine/ff646973), I wrote about the regular expression DoS attack demonstrated by Checkmarx at the OWASP Israel conference in September 2009. In that column, I also provided code for a regex DoS fuzzer based on the Visual Studio Database Projects Data Generation Plan functionality. Although this approach was technically sound and worked well to detect regex vulnerabilities, it was admittedly somewhat tedious to generate the test data, and it did require you to own a license of Visual Studio Database Projects. So I’m happy to report that the SDL team has released a new, freely downloadable tool to fuzz for regex vulnerabilities that takes care of the data generation details for you. The tool has no external dependencies (other than .NET Framework 3.5). It’s shown in Figure 3.
Figure 3 SDL Regex Fuzzer
You can download SDL Regex Fuzzer from microsoft.com/sdl. Give it a try and let us know what you think.
Bryan Sullivan is a security program manager for the Microsoft Security Development Lifecycle team, where he specializes in Web application and Microsoft .NET Framework security issues. He’s the author of “Ajax Security” (Addison-Wesley, 2007).
Thanks to the following technical expert for reviewing this article: Anil Revuru