
Guidance on Patterns & Practices: Security

 

Keith Pleas
Microsoft Corporation

January 2005

Applies to:
   Patterns & Practices Security
   Microsoft .NET Framework

Summary: Create secure applications on the Microsoft platform with patterns & practices guides described in this article. (14 printed pages)

Contents

Introduction
Patterns & Practices Security Guides
The Sample Applications
The "Real World"
Commercial Hosting
Conclusion

Introduction

What do books, spy equipment, stocks, bicycles, and pet supplies have in common? All have been used by Microsoft as the basis for business-to-consumer Web-based sample applications. While two of these sample applications—Duwamish Books and Fitch & Mather Stocks (F&M)—originated in the classic COM era and, after a number of updates, now ship with the Enterprise versions of Microsoft Visual Studio .NET, an astoundingly large number of real commercial applications have been built on the IBuySpy e-commerce and related portal samples. And while PetShop 3.0 claims to demonstrate "an enterprise architecture for building .NET Web Applications" that "follows Microsoft Prescriptive Architecture Guidelines," it is doubtful that many production applications have been built using it: Its entire reason for existence is to be a .NET version of the Sun Microsystems Pet Store application.

While almost all e-commerce applications share a number of common design characteristics—for example, catalog browsing, validated logon, and shopping carts—what tends to be a differentiating factor (other than the products sold) is their treatment of customer-specific data. And customer data goes hand-in-hand with security. Many developers assume that these sample applications—designed to serve as templates for developers writing real-world applications—would showcase the "best practices" for such important non-functional aspects as performance, scalability, and security. Unfortunately, in order to be easily installed and examined on developers' workstations, these samples contain a number of omissions and simplifications, and this is particularly true in the area of security. While additional security measures are described in the documentation that accompanies these samples, this critical information is easy to overlook.

Patterns & Practices Security Guides

So developers are using Microsoft sample applications—often created by vendors—that are more educational demonstrations than true templates for real-world business applications. Not that the creators of these applications are entirely to blame: Until recently, there has been little comprehensive guidance on what the "best practices" are across the application lifecycle. One thing that developers (and designers) are increasingly focusing on is "design patterns," which form a large part of the Microsoft patterns & practices guidance, where "patterns" generally refers to design patterns and "practices" to operational practices. This guidance is available free of charge in electronic form—typically as PDF documents—on the patterns & practices Web site, and printed versions can also be ordered in book form.

In the area of application security, two patterns & practices publications are particularly relevant:

  • Building Secure Microsoft ASP.NET Applications
  • Improving Web Application Security: Threats and Countermeasures

While these guides offer a tremendous amount of useful information, they necessarily reflect some simplifying assumptions that might not be appropriate for your scenarios. One of those assumptions is that the people deploying and administering an application have administrative control over the machines in the target infrastructure, and that is almost never the case. Hosted applications, in particular, typically reside on machines with significantly greater levels of security—and correspondingly reduced privileges—available to application "owners."

Building Secure Web Applications

The first security guide to come from the patterns & practices effort was Building Secure Microsoft ASP.NET Applications. Written while version 1.1 of the Microsoft .NET Framework and Microsoft Windows Server 2003 were nearing release but still in beta, this document actually does not describe any "design patterns" for security. Rather, it describes two different approaches to resource-access security:

  • The trusted subsystem model
  • The impersonation/delegation model

In general, the patterns & practices guidance recommends the "trusted subsystem" model. The "impersonation/delegation" model—which impersonates original callers and delegates on their behalf—implies that you are also managing user-level account information in the database's security system, since the database must authenticate and authorize each original caller individually. In a "trusted subsystem" model, the original caller is mapped to a role and then authorized for resource (including database) access based on role membership. This model allows using service accounts, including the Microsoft ASP.NET account in the case of a Web application, and it implies Windows authentication to the database.

The trusted subsystem model supports connection pooling, a key requirement for achieving high application scalability. Also, users cannot access the data directly and maintenance of security information in the database is minimized.

Within each of those two approaches, there is a recommended method for authorizing resource access. The general "pattern" described for the trusted subsystem model can be summarized as follows (a minimal code sketch appears after the list):

  1. Authenticate users
  2. Map users to roles
  3. Authorize based on role membership
  4. Access downstream resource manager using a fixed trusted identity.
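
For illustration, here is a minimal C# sketch of these steps in a data-access class, assuming Windows authentication to SQL Server; the "Customers" role, "Orders" database, and class name are illustrative assumptions, not code from the guide:

using System.Data.SqlClient;
using System.Security;
using System.Web;

public class OrderDataAccess
{
    public void PlaceOrder(string productId)
    {
        // Steps 1-3: the caller has already been authenticated and mapped to
        // roles; authorize here based on role membership.
        if (!HttpContext.Current.User.IsInRole("Customers"))
        {
            throw new SecurityException("Caller is not in the Customers role.");
        }

        // Step 4: access the downstream resource manager (SQL Server) using the
        // fixed, trusted process identity (for example, the ASP.NET account),
        // which also allows database connections to be pooled.
        string connectionString =
            "server=(local);database=Orders;Integrated Security=SSPI;";
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // ... execute commands under the trusted identity ...
        }
    }
}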

The equivalent "pattern" for the impersonation/delegation model is not called out explicitly, but would be equivalent to:

  1. Authenticate users
  2. Access downstream resources using client identity, using impersonation for local (same machine) resources and delegation for resources on remote machines.
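
As a rough illustration, the following C# sketch shows the impersonation half of this model, assuming the site uses Windows authentication so the authenticated user is a WindowsIdentity; the class and method names are hypothetical:

using System.Security.Principal;
using System.Web;

public class ReportDataAccess
{
    public void ReadReport()
    {
        // With Windows authentication, the authenticated user is a WindowsIdentity.
        WindowsIdentity caller = (WindowsIdentity)HttpContext.Current.User.Identity;

        // Temporarily run as the original caller; local resources see the caller's
        // identity, and delegation (where configured) flows it to remote machines.
        WindowsImpersonationContext impersonation = caller.Impersonate();
        try
        {
            // ... access the file, message queue, or database as the caller ...
        }
        finally
        {
            // Always revert to the process identity.
            impersonation.Undo();
        }
    }
}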

There are advantages and disadvantages to each resource-access model, but it may not be up to the application designer or developer to choose which to use. Rather, non-functional requirements may influence the required approach. For instance, auditing requirements may dictate that detailed resource access logs be kept that show specific users, the details of which are masked in the trusted subsystem model. Similarly, the application may be deployed to a hosted environment where there is no access to system features such as service accounts and—for SQL Server—Windows integrated security is not allowed. On the other hand, if performance is a primary concern, then the trusted subsystem model may be preferred since it can offer increased performance and scalability—by implementing such features as caching and connection pooling—that only make sense in the "aggregated" identity of the trusted subsystem.

To further complicate matters, the existing IT infrastructure itself may impact the preferred approach. For example, impersonation is not recommended under Windows 2000 because the ASP.NET account must be granted the "Act as part of the operating system" privilege, which is considered inherently insecure. This requirement was relaxed in Windows Server 2003—along with changing the process model—and applications deployed on this and later versions of the server operating system can use either approach without being exposed to this particular security problem. Also, IIS 6.0 (part of Windows Server 2003) provides increased process isolation, as well as using the more secure Network Service user account rather than the ASP.NET account used previously.

Whichever approach is chosen, both start with authenticating users. And here, too, there are choices to make: ASP.NET supports four modes of authentication—None, Forms, Windows, and Passport—and for the ASP.NET Windows authentication there are five corresponding modes for IIS authentication—Anonymous, Basic, Digest, Integrated Windows, and Custom. Custom IIS authentication can also be used with ASP.NET authentication set to "None." Obviously, the "best" approach depends on the application's requirements as well as the infrastructure. And that's just for authentication: Authorization of resource access can involve additional complexity. A general authorization "pattern" includes the following steps:

  1. Retrieve credentials
  2. Validate credentials
  3. Put users in roles
  4. Create an IPrincipal object
  5. Put the IPrincipal object into the current HTTP context
  6. Authorize based on the user identity/role membership

All except the last step are handled automatically by ASP.NET if Windows authentication is chosen.
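
When forms authentication is used instead, steps 3 through 5 are typically performed in the Application_AuthenticateRequest event in Global.asax. The following minimal sketch assumes forms authentication and a hypothetical LookupRoles helper that retrieves role names for the authenticated user:

using System;
using System.Security.Principal;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        if (context.Request.IsAuthenticated)
        {
            // Step 3: put the user in roles (LookupRoles is a hypothetical
            // helper that might read from a database or cache).
            string[] roles = LookupRoles(context.User.Identity.Name);

            // Steps 4-5: create an IPrincipal and attach it to the HTTP context.
            context.User = new GenericPrincipal(context.User.Identity, roles);
        }
    }

    private string[] LookupRoles(string userName)
    {
        // Placeholder implementation for illustration only.
        return new string[] { "Customers" };
    }
}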

Each of these authorization and authentication methods is described in Building Secure Microsoft ASP.NET Applications, which also includes a number of "How Tos" at the end of the document, such as "How To: Use Forms Authentication with SQL Server 2000" and "How To: Create an Encryption Library."

Improving Web Application Security

The second volume of security guidance from patterns & practices integrates the infrastructure side (or ITPRO, in Microsoft parlance) with the developer side. Or, to put it another way, it integrates the worlds of those people whose Microsoft support story starts with TechNet with those who start with MSDN. And this is a great thing, because developers must learn more about how these applications are actually deployed in order to build and test in such an environment. Ideally, developers would also have a complete development system to work with, but maybe we should shelve that topic for another time.

Most systems address keeping data secure in the database and—somewhat less frequently—as it travels over the network by either securing the channel (typically using SSL) or encrypting the data. What is often overlooked, however, is securing the data inside the running application. In a Web application, sensitive data is normally tied to a client as part of the session state (as well as corresponding page and control view state, if implemented). Each ASP.NET session is associated with a randomly generated 120-bit SessionID string, and protecting that string is critical to securing the application from unauthorized intrusion. The SessionID is typically passed in an HTTP cookie or modified URL, so you can imagine how easy it might be to intercept this value over an unencrypted network connection.

Improving Web Application Security introduces the new term "session token" to refer to the SessionID. This is a rather novel use of the word "token," but at least the meaning is clear. This same document also uses "authentication token" to refer to the forms authentication cookie. It is important not to confuse these "tokens" with what both the .NET Framework SDK and Platform SDKs (soon to be merged together) call an "access token." Technically, an access token is either a "primary token" or an "impersonation token" and contains the security information for a logon session. It identifies the user, the user's groups, and the user's privileges.

Creative terminology aside, Improving Web Application Security does provide some solid security advice. For session management using forms authentication, it recommends (combining lists from Part 2 - Designing Secure Web Applications and Part 3 - Building Secure Web Applications):

  • Secure sensitive pages using SSL and require authentication.
  • Protect session state from unauthorized access.
  • Do not rely on client-side state management options.
  • Do not mix session tokens and authentication tokens.
  • Use SSL to protect session authentication cookies.
  • Encrypt the contents of the authentication cookies.
  • Limit session lifetime.

The fifth recommendation should actually have been "Use SSL to protect session and authentication cookies" to be consistent with the practice of separating the session from the secure (authenticated) part of the application. Why would you want to draw this distinction? First, you need to ensure that obtaining a SessionID alone is not sufficient to access the secure pages. Secure pages should require the forms authentication token, which is only passed over a secure connection when requireSSL="true" is set on the <forms> element. Second, this enables separate management of the forms authentication cookie, which may contain sensitive user data in the form of multiple key-and-value pairs. The forms authentication cookie may be configured for security, persistence, expiration, and domain/path.
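
For illustration, the requireSSL setting mentioned above looks like this in Web.config (the cookie name and login URL are placeholders, not values from either guide):

<authentication mode="Forms">
   <forms name=".MYAPPAUTH" loginUrl="secure/logon.aspx"
          protection="All" requireSSL="true" />
</authentication>

With requireSSL="true", ASP.NET marks the forms authentication cookie as secure, so the browser will return it only over HTTPS connections.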

Clearly, all of these recommendations are valid practices for increasing session security. However, they also depend on what is provided or allowed by the infrastructure. They also have varying levels of "cost" associated with them in terms of development, testing, and management. While requiring SSL is a simple configuration setting, SSL certificates themselves are not free unless you are using your own root certificate authority (Root CA). If you do, however, your Root CA would not be recognized by your users unless your administrator had pre-populated their list of certifying authorities using the Internet Explorer Administration Kit (IEAK), or they manually chose to accept your root certificate.

Session lifetime can be configured in the <sessionState> element of machine.config and your application's Web.config. The default is 20 minutes. You should also limit cookie lifetime. The default cookie timeout—configured in the <forms> element—is 30 minutes, and the patterns & practice guidance suggests that this timeout value be reduced to 10 minutes. This will limit the window of opportunity for cookie replay attacks.
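
As an illustration only, the relevant Web.config fragments might look like the following, with the forms timeout reduced to the suggested 10 minutes; the cookie name and login URL are again placeholders:

<sessionState mode="InProc" timeout="20" />

<authentication mode="Forms">
   <forms name=".MYAPPAUTH" loginUrl="logon.aspx" protection="All"
          timeout="10" slidingExpiration="false" />
</authentication>

Setting slidingExpiration="false" (shown here) makes the 10-minute window absolute rather than reset on each request.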

On the server, applications must be careful about how they use sensitive user data. In general, this data should be retrieved only when needed and discarded as soon as it has been used.

Rather than storing sensitive data, a common practice is to store salted hashes of the data, which can be used for validation. And if the sensitive data is sent over a communications channel, that channel should be secured or the data itself should be encrypted. This requirement commonly applies to data sent to and from Web services.
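
A minimal sketch of the salted-hash approach follows; the 16-byte salt and SHA-1 algorithm are illustrative choices rather than a recommendation from the guides:

using System;
using System.Security.Cryptography;
using System.Text;

public class CredentialHelper
{
    // Hash the salt plus the secret; store the salt and the hash, not the secret.
    public static string CreateSaltedHash(string secret, out string salt)
    {
        byte[] saltBytes = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(saltBytes);
        salt = Convert.ToBase64String(saltBytes);

        byte[] hash = new SHA1Managed().ComputeHash(
            Encoding.UTF8.GetBytes(salt + secret));
        return Convert.ToBase64String(hash);
    }

    // Re-hash the supplied value with the stored salt and compare for validation.
    public static bool Validate(string secret, string salt, string storedHash)
    {
        byte[] hash = new SHA1Managed().ComputeHash(
            Encoding.UTF8.GetBytes(salt + secret));
        return Convert.ToBase64String(hash) == storedHash;
    }
}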

In a Web farm environment, application session state is generally maintained across all servers—in a durable store such as a database, using a custom service, or using the State Server with ASP.NET. If it is not shared across all servers, then it is necessary—through hardware or software—to return subsequent session requests to the same physical server machine. Unfortunately, in practice this session affinity can dramatically impact performance.
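
For example, moving session state out of process to the ASP.NET State Server is a Web.config change along these lines (the server name is a placeholder; the port shown is the State Server default):

<sessionState mode="StateServer"
              stateConnectionString="tcpip=stateserver01:42424"
              cookieless="false"
              timeout="20" />

Using mode="SQLServer" with a sqlConnectionString instead keeps the state in a database, the most durable of the options.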

The Sample Applications

Now that we have some "best practices" security guidance from Microsoft, we can ask: How well do the sample applications measure up? Let's do a "security review" of Duwamish and F&M and see for ourselves.

First of all, both Duwamish and F&M use forms authentication. This is not necessarily a poor approach, since it works with all common browser types and doesn't require Windows 2000 or 2003 as a server operating system. This is handy indeed for a sample application that can be examined by developers—or press reviewers or analysts, for that matter. Of course, it also means that the application can't demonstrate any server-specific security techniques. In particular, neither application uses role-based security or the trusted subsystem approach, though the accompanying documentation does have a section on implementing role-based security.

However, even though these two sample applications use forms authentication, they should still demonstrate common security guidance. For instance, any "real-world" application would normally encrypt authentication cookies and use SSL for any sensitive pages. These sensitive pages are commonly placed in a secure subdirectory that uses SSL: This minimizes the performance impact of using SSL.

Unfortunately, neither Duwamish nor F&M—in their own words—takes a "real world" approach to security. And there's the rub: they were designed for developers, and creating the "real world" architecture was not considered practical. That said, there is a substantial gap between the samples as provided and the "real world" applications that they are often assumed to be.

For example, neither sample reflects the guidance around encrypting authentication cookies described above. Improving Web Application Security also recommends that applications specify unique name and path attributes in the <forms> element, which neither sample implements. And for SSL, the documentation for both samples does mention using SSL for encryption, but this is really only possible in Duwamish, which puts "sensitive" pages in a "\secure" vroot and includes a configuration setting for SSL enablement that redirects "secure" checkout requests to the separate SSL-enabled vroot. F&M simplifies securing communications even further and includes a "BehindTheScenes" comment that says:

"Security Note - A deployed Internet application would post back login 
credentials using SSL to an HTTPS URL. Fitch and Mather 7.0 passes clear 
text passwords that anyone could obtain. Prior to deployment, it would be 
essential to address this serious security flaw"

Since both Duwamish and F&M use SQL Server for their database, you might expect that they would illustrate good security principles for data access. Well, the user account in SQL Server for Duwamish is called "Duwamish7vb_login" (for the Visual Basic version) with a password of "password_03F47A8338304FD68A5B," which is certainly more secure than the connection string used for the session state:

sqlConnectionString="data source=127.0.0.1;user id=sa;password="

Ironically, this connection is not used since the sessionState mode is set to "InProc" by default, but the documentation explicitly calls out how to change it for distributed deployment. Also—certainly for convenience, but a poor practice nevertheless—all passwords are stored in clear text in the Web.config file.

Some of the patterns & practices security guidelines seem inappropriate for sample applications. For example, Improving Web Application Security advises using absolute URLs for navigation redirects. Fortunately, this can be achieved not by statically coding the entire URL, but by constructing one dynamically:

// Form an absolute path using the server name and v-dir name
string serverName = HttpUtility.UrlEncode(Request.ServerVariables["SERVER_NAME"]);
string vdirName = Request.ApplicationPath;
Response.Redirect("https://" + serverName + vdirName + "/Restricted/Login.aspx");

Duwamish uses equivalent code, appropriately located on the PageBase class. This base class allows all pages in the application to share common processing logic. The general design pattern for this base class is the "Page Controller," which is a refinement of the "Model-View-Controller" design pattern underlying the code-behind model of ASP.NET page processing. More information about these design patterns can be found in the patterns & practices document Enterprise Solution Patterns Using Microsoft .NET. Anyway, all pages in Duwamish use the following properties:

Private ReadOnly Shared Property UrlSuffix As String
   Get
      UrlSuffix = HttpContext.Current.Request.Url.Host & _
         HttpContext.Current.Request.ApplicationPath
   End Get
End Property

Public ReadOnly Shared Property UrlBase As String
   Get
      UrlBase = "http://" & UrlSuffix
   End Get
End Property

Access restriction in Duwamish and F&M is likewise a mixed story, with Duwamish doing a better job. In addition to the SSL issues already mentioned, the patterns & practices guidelines state that applications should use <authorization> elements to restrict access and force login. In the base application directory, Duwamish uses the following (mostly acceptable) security configuration:

<authentication mode="Forms">
   <forms name=".ADUAUTH" loginUrl="secure\logon.aspx" protection="All">
   </forms>
</authentication>
<authorization>
   <allow users="*" />
</authorization>

The protection="All" setting enforces both privacy (encryption) and integrity (data validation) of the forms authentication cookie. The name is set, but there's no path that create a security hole with another application on a hosted server. Also, the timeout (and whether it is a slidingExpiration that is reset with each request) is not specified, which means that the cookie will live for 30 minutes: This timeout should probably be reduced, and you should also be aware that the default for slidingExpiration changed from true in version 1.0 of ASP.NET to false in version 1.1.

In the "secure" vroot, Duwamish then uses this security configuration:

<authorization>
   <deny users="?" />
   <allow users="*" />
</authorization>

Thus, the "secure" web directory first denies anonymous users, which is also a good practice.

Now let's look at F&M. Here's the corresponding root security configuration:

<authentication mode="Forms">
   <forms name="FMStocks7Auth" loginUrl="Login.aspx">
   </forms>
</authentication>

There is no protection (which, fortunately, defaults to All), no path, and no (non-default) timeout. Unfortunately, F&M doesn't have a "secure" vroot and so it denies anonymous users on a per-page basis (as shown in the following example), which is much more likely to lead to configuration mistakes:

  <!-- Mark the AccountSummary.aspx page as available only to authorized users -->
  <location path="AccountSummary.aspx">
    <system.web>
      <authorization>
        <deny users="?" />
      </authorization>
    </system.web>
  </location>

If that seems less than secure, then take a look at the security configuration for PetShop:

<authentication mode="Forms">
   <forms name="PetShopAuth" loginUrl="SignIn.aspx" protection="None" timeout="60" />
</authentication>

Note the protection="None" setting. All is the default—and recommended—value, combining for encryption (for privacy) and validation (for integrity). So this application is open to cookie tampering and replay attacks. Implementing SSL would at least eliminate capturing cookies using network monitoring. PetShop also denies anonymous users on a per-page basis, which (again) is more likely to lead to configuration mistakes.

The "Real World"

As we noted throughout the discussion above on "best practices" for security, the infrastructure onto which the application is deployed can have a significant effect on the design decisions made during development. This assumes, of course, that the development staff is even aware of how the infrastructure is configured!

So, what are some of these "real world" limitations? Well, for a recent customer project for a major corporation, the IT department would not install any beta version of any operating system, application framework (meaning, in this case, the .NET Runtime), or supporting applications such as SQL Server. None of the machines in the data center—in the Web, application, and data tiers—was configured to be part of a domain, so there was no Active Directory available. Direct file access was not allowed. No network access off of the machine was allowed. We couldn't write to the system registry, require any other software product, or use any COM-based components. So, clearly we couldn't use Serviced Components. We also couldn't use DPAPI—a COM-based security library that first shipped with Microsoft Windows XP and recently with Windows Server 2003—to store user names and passwords in the registry in encrypted form. Each application could have its own database, but the only SQL Server security option was SQL authentication. They had also standardized on Windows Server 2003 (with IIS 6.0) because of the improved application isolation.

Of course, we couldn't actually connect to the machines either (or use FTP, or mount a drive). Instead, files were pushed through a custom publication tool. And if that file was a DLL, then it wouldn't get propagated either. After a little bit of negotiation, we were able to get the State Server turned on for the Web "cluster," otherwise it would have to have been the shared SQL Server (no "affinity" solutions allowed either).

Commercial Hosting

Commercial ASP.NET hosts may actually be a more flexible alternative than in-house IT departments. However, they are also extremely sensitive to security, so most of their requirements parallel those of the very restrictive corporate infrastructure described above.

At the low end of the hosting world, anarchy rules. No logons are permitted; that would require a domain account, which is out of the question. No COM, no Serviced Components, no network ports/protocols other than HTTP over port 80, FTP over port 21, and—if you're lucky—SSL over port 443. SSL requires a certificate for server authentication, and server certificates aren't free. Maybe you'll have access to an SMTP service, requiring authentication so it isn't turned into an open relay. Agree to the terms, pay your money, and you're good to go.

So, what does this mean for application design? For one, impersonation/delegation is the only possible model: the host sets impersonate="true", which forces you to run as an anonymous user, and you will not be able to override it. Fortunately, this anonymous identity is isolated from the anonymous users of other hosted accounts. You will also likely end up with a number of fixed identities in your Web.config, for example:

<identity impersonate="true" userName="bob" password="inClearText"/>

While you would normally secure your Web.config file with an access control list (ACL) to limit visibility and tampering, this is not usually possible with application hosts since it involves configuring domain (including local domain) security.

Your database had better be secure: There's no effective way to block SQL injection attacks at the infrastructure level in a hosted environment, so your data access code has to defend against them itself. Also, you are not the database administrator, so you need to encrypt all sensitive data in the database. Your server probably will have FrontPage extensions—they're required by the Microsoft commercial hosting program—but be aware that FrontPage extensions don't work with the IISLockDown tool. This is truly unfortunate, since running the IISLockDown tool is probably the single best step you can take to secure a Windows 2000/Windows 2003 Web server.
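
Returning to SQL injection, a minimal sketch of the parameterized-command defense might look like the following; the table, column, and connection string are hypothetical:

using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    public DataSet GetCustomer(string connectionString, string customerId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            // The user-supplied value is passed as a parameter, never spliced
            // into the SQL text, so it cannot change the shape of the query.
            SqlCommand command = new SqlCommand(
                "SELECT CustomerId, Name FROM Customers WHERE CustomerId = @id",
                connection);
            command.Parameters.Add("@id", SqlDbType.NVarChar, 20).Value = customerId;

            DataSet result = new DataSet();
            new SqlDataAdapter(command).Fill(result);
            return result;
        }
    }
}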

Code Access Security (CAS)

So far we've focused on the identity, authentication, and authorization of the user (or service account in the case of the trusted subsystem model). However, the .NET Framework also provides extensive support for identifying and securing code through a system called code access security (CAS). In short, CAS makes it possible to give code an identity—enforced through strong-naming with a cryptographic key pair—and to restrict what code can do through a configurable set of security permissions known as a security policy.

While CAS was part of the .NET Framework from the beginning, support for partially trusted code—which allows configuring trust levels—was only enabled starting with ASP.NET version 1.1. Partially trusted code allows a so-called "resource constraint model" that can be configured to restrict access to specific resources—such as the file system, system registry, and directory services. Since ASP.NET 1.0 did not include support for partially trusted code, all ASP.NET 1.0 applications run with full trust.
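
For example, under ASP.NET 1.1 an application can opt out of full trust with a single Web.config setting that selects one of the built-in trust levels (shown here purely for illustration):

<system.web>
   <!-- Run under the built-in Medium trust policy instead of Full trust
        (supported in ASP.NET 1.1 and later). -->
   <trust level="Medium" originUrl="" />
</system.web>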

While configuring, testing, and managing partially trusted code is a complex and difficult process today, identifying code—and restricting which code can call it—is a much simpler process. Perhaps the easiest way to visualize how useful this is, is to imagine an accounting program (or even a personal checkbook program) that is factored into several components (called assemblies in .NET). Enforcing code identity means that a utility component—which might have properties that return sensitive information like account numbers, or methods that do critical tasks like transfer money—can only be called from an assembly signed with a specific strong name. This prevents the utility assembly from being "hijacked" by other code to perform inappropriate actions. Be aware, however, that link demands only test the immediate caller.

Enforcing caller identity is the "low hanging fruit" of CAS. It's relatively easy for a developer to implement, requiring only a strongly named caller and a link demand for the corresponding security key in the called code. Caller identity is also simple to test: Simply call the component from any unsigned code and test for a security exception. And best of all, there is no additional operational requirement. Of course, this method only works with pre-compiled components (including those generated using the code-behind compilation model in Visual Studio .NET). Even if it were to work with dynamically generated code, the key pair file would also have to reside in an application's directory, and that would be an extremely poor practice from a security standpoint. While the documentation for Duwamish and F&M both describes the general process for enforcing caller identity through a CAS link demand, this really should have been implemented in the samples.
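
As a minimal sketch of what such a link demand looks like in code, consider the following; the class name is hypothetical and the public key is a truncated placeholder (the real value comes from your own key pair, for example via sn -Tp):

using System.Security.Permissions;

// Only assemblies signed with the matching strong-name key can link to (call)
// members of this class. Remember that link demands check only the immediate caller.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "00240000048000009400000006020000...")]
public class AccountTransfer
{
    public void Transfer(string fromAccount, string toAccount, decimal amount)
    {
        // ... perform the sensitive operation ...
    }
}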

The patterns & practices guidance on code access security tells you how to do it (at a rather high level), but it doesn't tell you when you should implement CAS. In particular, Improving Web Application Security states that developers can "use CAS to further restrict which code can call your code." Two separate chapters in that volume show developers how to implement—and administrators how to configure—CAS. One unstated but critical implication is that this implies a high level of coordination between the developer and operations side, which is not practical in many environments.

Improving Web Application Security also notes that resource access code should be located in separate assemblies, which allows policy to be applied at an even more granular level. That document then goes on to blithely note that "this requires a certain amount of reengineering." Ironically, you may instrument your application to log failures—using the event log or a database—but your operations staff may not actually grant permission for the application to do this!

You should investigate CAS if your infrastructure permits it, but you should always deploy it using strongly named assemblies and link demands. One particular advantage of using link demands is that it is not possible to circumvent them—accidentally or on purpose—by changing the trustLevel or setting the PermissionSet to Full.

Conclusion

A wise man once said "security should be cheap for your friends, expensive for your enemies." When this concept is applied to something as simple as choosing passwords, it means that users should not have to deal with impossible-to-remember passwords, because they will either not use the application (of course, this is not an option for many) or else write the password down and put it in a convenient place (like a sticky note on their monitor).

It should be easy to "do the right thing" when developing secure software. Unfortunately, it is a difficult and complex process. The two patterns & practices guides to Web application security cover just a few facets of security, yet run to a combined total of 1,400 pages. The book Writing Secure Code from Microsoft Press—touted as "Required reading at Microsoft" and distributed to all attendees at a recent Microsoft Professional Developers Conference (PDC)—is another 800 pages, with only a single chapter specifically about writing .NET code. And there are few tools available that help test for security issues. So it hardly comes as a surprise that developers have turned to what are essentially marketing demos to use as templates when developing their own applications. Unfortunately, the simplifying assumptions of these demos—even when they are highlighted in the documentation—get lost in the shuffle.

A familiar aphorism states that "If all you've got is a hammer, everything looks like a nail." Well then, is it any surprise that a lot of production Web applications look like the demo samples? What is clearly needed is a new generation of real-world sample applications that are designed and built using the "best practices" not just for security (our focus here), but for robustness, scalability, testing, and deployment; in fact, of all phases of the software development lifecycle. It is also important to recognize that this guidance will improve over time until it ultimately becomes part of the underlying platform.

The first such sample application is a reference solution that implements a service-oriented architecture (SOA) and includes input from a large interested community through a GotDotNet workspace. Eventually, more of these "real world" applications—covering additional scenarios and reflecting current thinking—will be available to replace the simplified demo applications that developers have become familiar with. Be sure to keep an eye on the Microsoft patterns & practices homepage, as new applications will surely surface.
