September 2010

Volume 25 Number 09

Security Briefs - The MSF-Agile+SDL Process Template for TFS 2010

By Bryan Sullivan | September 2010

Anyone who is a regular reader of this magazine will be familiar with Microsoft Team Foundation Server (TFS) and the productivity benefits development teams can get from using it. (If you’re not familiar with TFS, check out the article by Chris Menegay from our Visual Studio 2005 Guided Tour. While it covers an older version, it does give a useful overview of the kinds of features you can leverage in TFS.)

If you use TFS yourself, you’re probably also familiar with the Microsoft Solutions Framework (MSF) for Agile Software Development process template—better known as MSF-Agile—that ships with TFS. The topic of this month’s column is the new MSF-Agile plus Security Development Lifecycle (SDL) process template. MSF-Agile+SDL builds on the MSF-Agile template and adds SDL security and privacy features to the development process.

You can download the MSF-Agile+SDL template for Visual Studio Team System 2008 or 2010 from microsoft.com/sdl. Before you download it, however, you’ll probably want to know what’s been built into it. So let’s get started.

SDL Tasks

The core of the SDL is its security requirements and recommendations—activities that dev teams must perform throughout the development lifecycle in order to ensure better security and privacy in the final product. These requirements include policy activities such as creating a security incident response plan, as well as technical activities such as threat modeling and performing static vulnerability analysis. All of these activities are represented in the MSF-Agile+SDL template as SDL task work items.

Probably the biggest difference between SDL tasks and the standard work items that represent functional tasks is that project team members are not meant to directly create SDL tasks themselves. Some SDL tasks are automatically created when the Team Project is first created. These are relatively straightforward, one-time-only security tasks, such as identifying the member of your team who will serve as the primary security contact. Other SDL tasks are automatically created by the process template in response to user actions. (More specifically, they’re automatically created by the SDL-Agile controller Web service that gets deployed to the TFS application tier.)

Whenever a user adds a new iteration to the project, the template adds new SDL tasks to the project that represent the security tasks to be performed during that iteration. A good example of a per-iteration SDL task is threat modeling: The team must assess the changes it makes over the course of the iteration to identify potential new threats and mitigations.

Finally, whenever a user checks a new Visual Studio project or Web site into the Team Source Control repository, the template adds SDL tasks to reflect the security work that must be done specifically for that project. For example, whenever a new C or C++ project is added, SDL tasks are created to ensure the use of buffer overflow defense compiler and linker settings, such as the /DYNAMICBASE flag for address space layout randomization and the /GS flag for buffer security check.
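
To make this concrete, here’s a rough sketch of what those settings look like for a native project. The flag names (/GS for the buffer security check and /DYNAMICBASE for address space layout randomization) are the real Visual C++ switches; the source file and build command are purely illustrative.

    // example.cpp: the security work here is in the build flags, not the code.
    #include <cstdio>

    int main()
    {
        std::printf("Built with a stack cookie (/GS) and ASLR support (/DYNAMICBASE).\n");
        return 0;
    }

    // Illustrative build (cl is the Visual C++ compiler, link is the linker):
    //   cl /GS /W4 /c example.cpp
    //   link /DYNAMICBASE example.obj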

The template is sophisticated enough to recognize the difference between native C/C++ projects and Microsoft .NET Framework projects, and it won’t add invalid requirements: the /DYNAMICBASE and /GS flags are meaningless to C# code, so those SDL tasks won’t be created for C# projects. Instead, C# projects get .NET-specific security tasks such as reviewing any use of AllowPartiallyTrustedCallersAttribute. Likewise, the template can distinguish client/server and standalone desktop applications from Web sites and Web services, and will add only the appropriate set of SDL tasks for each.

SDL Task Workflow and Exceptions

The state and reason workflow transitions for SDL tasks also differ from those of functional tasks. A functional task can be marked as closed for several different reasons: completed, cut from the project, deferred to a later iteration, or obsolete and no longer relevant to the project. Of these reasons, only completed is applicable to SDL tasks.

Teams that follow the SDL can’t simply cut security and privacy requirements from their projects. Functional requirements can be horse-traded in and out of projects for technical or business reasons, but security requirements must be held to a higher standard. It’s not impossible to skip an SDL task, but a higher level of process must be followed to make that happen.

If, for whatever reason, a team can’t complete a required SDL task, it must petition its security advisor for an exception to the task. The team or its management chooses the team’s security advisor at the project’s start. This individual should have experience in application security and privacy, and ideally should not be working directly on the project—he should not be one of the project’s developers, program managers or testers.

At Microsoft, there’s a centralized group of security advisors who work in the Trustworthy Computing Security division. These security advisors then work with the individual product teams directly. If your organization has the resources to create a dedicated pool of security advisors, great. If not, it’s best to select the individual with the strongest background in security.

It’s up to the team’s security advisor to approve or reject any exception request for an SDL task. The team creates an exception request by setting the SDL task state to Exception Requested and filling out the Justification, Resolution Plan, and Resolution Timeframe fields. Each SDL task also has a read-only Exception Rating field that represents the inherent subjective security risk of not completing the requirement, which ranges from 4 (least risk) to 1 (critical; most risk). The security advisor weighs the team’s rationale against the exception rating and either closes the SDL task with a Reason of Approved, or reactivates the SDL task with a Reason of Denied.

However, even if the request is approved, most exceptions don’t last forever. This is where the Resolution Timeframe field comes into play. Teams generally request exceptions for a set number of iterations—usually just one iteration, but sometimes as many as three. Once the specified number of iterations has elapsed, the process template will expire the exception and reactivate the SDL task.
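
The bookkeeping here is handled by the template, but the rule it applies is simple enough to sketch. The function below is illustrative only (the template tracks the iterations for you); it just captures the idea that an exception granted for a given number of iterations expires once that many iterations have passed.

    // Illustrative only: has an approved exception run its course?
    // approvedIteration is the iteration in which the exception was granted;
    // timeframeIterations is the number of iterations it was granted for.
    bool ExceptionHasExpired(int currentIteration,
                             int approvedIteration,
                             int timeframeIterations)
    {
        // Once the requested number of iterations has elapsed, the template
        // expires the exception and reactivates the SDL task.
        return (currentIteration - approvedIteration) >= timeframeIterations;
    }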

Security Bugs

After ensuring that security and privacy requirements are met, the next most important function of the SDL is to ensure that products don’t ship with known security bugs. Tracking security bugs separately from functional bugs is critical to ensuring the security health of your product.

Unlike with SDL tasks, the MSF-Agile+SDL template doesn’t add a separate SDL Bug work item type to distinguish security bugs from functional bugs. Instead, it adds the fields Security Cause and Security Effect to the existing Bug work item type. Whenever a team member files a new bug, if the bug is strictly a functional bug with no security implications, the finder simply leaves these fields at their default values of Not a Security Bug. However, if the bug does represent a potential security vulnerability, the finder sets the Security Cause to one of the following values:

  • Arithmetic error
  • Buffer overflow/underflow
  • Cross-Site Scripting
  • Cryptographic weakness
  • Directory traversal
  • Incorrect/no error messages
  • Incorrect/no pathname canonicalization
  • Ineffective secret hiding
  • Race condition
  • SQL/script injection
  • Unlimited resource consumption (denial of service)
  • Weak authentication
  • Weak authorization/inappropriate permission or ACL
  • Other

The finder also sets the Security Effect to one of the STRIDE values:

  • Spoofing
  • Tampering
  • Repudiation
  • Information Disclosure
  • Denial of Service
  • Elevation of Privilege

Finally, the finder can also choose to set the Scope value for the bug. In a nutshell, Scope defines some additional subjective information about the bug that is then used to determine severity. The allowed values for Scope vary based on the selected Security Effect. For example, if you choose Elevation of Privilege for the Security Effect, the possible choices for Scope include:

  • (Client) Remote user has the ability to execute arbitrary code or obtain more privilege than intended.
  • (Client) Remote user has the ability to execute arbitrary code with extensive user action.
  • (Client) Local low-privilege user can elevate himself to another user, administrator or local system.
  • (Server) Remote anonymous user has the ability to execute arbitrary code or obtain more privilege than intended.
  • (Server) Remote authenticated user has the ability to execute arbitrary code or obtain more privilege than intended.
  • (Server) Local authenticated user has the ability to execute arbitrary code or obtain more privilege than intended.

You can see that the axes of severity for Elevation of Privilege vulnerabilities—that is, what characteristics make one Elevation of Privilege worse than another—deal with conditions such as the site of the attack (the client or the server) and the authentication level of the attacker (anonymous or authenticated). However, if you choose a different Security Effect, such as Denial of Service, the Scope choices change to reflect the axes of severity for that particular effect:

  • (Client) Denial of service that requires reinstallation of system and/or components
  • (Client) Denial of service that requires reboot or causes blue screen/bug check
  • (Client) Denial of service that requires restart of application
  • (Server) Denial of service by anonymous user with small amount of data
  • (Server) Denial of service by anonymous user without amplification in default/common install
  • (Server) Denial of service by authenticated user that requires system reboot or reinstallation
  • (Server) Denial of service by authenticated user in default/common install

Once the values for Security Cause, Security Effect, and Scope have all been entered, the template uses this data to calculate a minimum severity for the bug. The user can choose to set the actual bug severity higher than the minimum bar—for example, to set the bug as a “1–Critical” rather than a “2–High”—but never the other way around. This may seem overly strict, but it avoids the temptation to downgrade bug severity in order to meet ship dates or sprint deadlines.
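
The actual bug bar is built into the template, but the clamping rule is worth spelling out. In the sketch below, the lookup values are placeholders I’ve invented for illustration; only the “never below the minimum bar” behavior reflects what the template enforces. Keep in mind that a smaller number means a more severe bug, so “not lower than the bar” means the numeric value may never be larger than the bar’s.

    #include <algorithm>
    #include <string>

    // Placeholder lookup: the template derives the real minimum severity from
    // Security Cause, Security Effect and Scope. The values below are invented
    // for this sketch. 1 = Critical (most severe), 4 = least severe.
    int MinimumSeverityFor(const std::string& effect, const std::string& scope)
    {
        if (effect == "Elevation of Privilege" && scope.find("anonymous") != std::string::npos)
            return 1;   // hypothetical: remote anonymous EoP treated as critical
        return 3;       // hypothetical default
    }

    // The rule the template enforces: the chosen severity may be more severe
    // than the bar (a smaller number), but never less severe (a larger number).
    int ApplySeverityBar(int requestedSeverity, int minimumBar)
    {
        return std::min(requestedSeverity, minimumBar);
    }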

If you’d like to learn more about the rationale for setting up a more objective bug bar methodology for triaging bugs, read the March 2010 Security Briefs column, “Add a Security Bug Bar to Microsoft Team Foundation Server 2010” (msdn.microsoft.com/magazine/ee336031). The process for adding a bug bar to TFS that I detailed in that article has already been built into the MSF-Agile+SDL template.

Finally, there’s one more optional field in the bug work item. You can use the Origin field to specify the name of the automated security tool that originally found the bug (if any) or you can leave the field at its default value of “User” if a user found the bug through manual code review or testing.

Over time, you’ll collect enough data to determine which of your testing tools are providing the biggest bang for the buck. To make this determination easier, the MSF-Agile+SDL template includes an Excel report called Bugs by Origin that displays a bar chart of vulnerabilities broken out by the Origin field.

You can customize this report to filter the data based on Severity, Security Cause or Security Effect. If you want to see which tools work best at finding cross-site scripting vulnerabilities, or which tools have found the most Critical-severity Elevation of Privilege bugs, it’s easy to do so.

Bug Workflow

Just as you can’t defer SDL tasks, you can’t defer any bug with security implications (that is, any bug with its Security Effect set to a value other than Not a Security Bug). The team must request an exception in order to delay fixing any security bug with Severity of “3 – Moderate” or higher.

The process for this is identical to the exception request process for SDL Tasks: a team member sets the status to Exception Requested and enters details for the Justification, Exception Resolution and Exception Timeframe fields. The team’s security advisor then reviews the exception request and either approves it (setting the State to Closed with a Reason of Approved) or denies it (setting the State to Active with a Reason of Denied).

Security Queries and the Security Dashboard

The MSF-Agile+SDL template also includes several new team queries to simplify following the process. These queries appear under the Security Queries folder in Team Explorer and include:

  • Active Security Bugs
  • My Security Bugs
  • Resolved Security Bugs
  • Open SDL Tasks
  • My SDL Tasks
  • Open Exceptions (includes both tasks and bugs, and is especially useful for security advisors)
  • Approved Exceptions
  • Security Exit Criteria

Most of these queries are self-explanatory, but the Security Exit Criteria query needs a little more explanation. To meet its SDL commitment for a given iteration, the team must have completed all of the following:

  • All every-sprint, recurring SDL task requirements for that iteration must be complete or have had an exception approved by the team’s security advisor
  • There must be no expired one-time or bucket SDL task requirements
  • All bugs with security implications with Severity of “3 – Moderate” or higher must be closed or have had an exception approved by the team’s security advisor

The terms every-sprint, one-time and bucket in this context refer to the SDL-Agile concept of organizing requirements based on the frequency with which they must be completed. Every-sprint requirements are recurring requirements and must be completed in every iteration. One-time requirements are non-recurring and only need to be completed once. Bucket requirements are recurring requirements, but only need to be completed once every six months.
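
As a quick illustration of the three frequencies (the six-month window for bucket requirements comes from the SDL-Agile guidance; the code itself is just a sketch, not part of the template):

    // Illustrative sketch of the SDL-Agile requirement frequencies.
    enum SdlFrequency
    {
        EverySprint,   // recurring: must be completed in every iteration
        Bucket,        // recurring: must be completed at least once every six months
        OneTime        // non-recurring: completed once for the life of the project
    };

    // Hypothetical helper: is a requirement due again, given how many whole
    // months have passed since it was last completed (-1 meaning never)?
    bool IsDue(SdlFrequency frequency, int monthsSinceLastCompleted)
    {
        switch (frequency)
        {
        case EverySprint: return true;
        case Bucket:      return monthsSinceLastCompleted < 0 || monthsSinceLastCompleted >= 6;
        case OneTime:     return monthsSinceLastCompleted < 0;
        }
        return false;
    }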

A detailed discussion of this classification system is beyond the scope of this article, but if you’d like to learn more about it, read the MSDN Magazine article “Agile SDL: Streamline Security Practices for Agile Development” from the November 2008 issue.

The intent of the Security Exit Criteria query is to provide team members with an easy way to check how much more work they have left in order to complete their SDL commitment. If you configure a SharePoint site for your MSF-Agile+SDL team project when you create it (normally this is done for you automatically), you’ll also see the Security Exit Criteria query results on the team project’s Security Dashboard.

The new Security Dashboard is available only for MSF-Agile+SDL projects (see Figure 1). By default, it includes the Security Exit Criteria, Open SDL Tasks, Open Exceptions, and Security Bugs queries, but these can be customized if you like. The Security Dashboard is also set as the default project portal page for all MSF-Agile+SDL projects, but if you’d like to change to a different default dashboard, simply open the Dashboards document library, select the dashboard you want to use, and choose the “Set as Default Page” option.

Figure 1 MSF-Agile+SDL Security Dashboard


Check-in Policies

The final feature of the MSF-Agile+SDL process template is the set of SDL check-in policies. These policies help prevent developers from checking in code that violates certain SDL requirements and could therefore lead to security vulnerabilities. The SDL check-in policies available are shown in Figure 2.

Figure 2 MSF-Agile+SDL Check-in Policies

  • SDL Banned APIs: Ensures that warning C4996 (use of a deprecated function) is treated as an error. Because most of the runtime library functions that can potentially lead to buffer overruns (for example, strcpy, strncpy and gets) have been deprecated in favor of more secure alternatives (strcpy_s, strncpy_s and gets_s, respectively), this check-in policy can significantly improve the application’s resistance to buffer overrun attacks. (A before-and-after example follows this list.)
  • SDL Buffer Security Check: Ensures that the compiler option Enable Buffer Security Check (/GS) is enabled. This option reorganizes the stack of the compiled program to include a security cookie, or canary value, that makes it much more difficult for an attacker to write a reliable exploit for a stack overflow vulnerability.
  • SDL DEP and ASLR: Ensures that the linker options Data Execution Prevention (/NXCOMPAT) and Randomized Base Address (/DYNAMICBASE) are enabled. /DYNAMICBASE randomizes the address at which the application is loaded into memory, and /NXCOMPAT helps prevent code from executing in memory that was intended to hold data. Especially when used in combination, these two options are strong defense-in-depth measures against buffer overrun attacks.
  • SDL Safe Exception Handlers: Ensures that the linker option /SAFESEH is enabled. This option builds a table of the image’s legitimate exception handlers at link time and prevents attackers from substituting malicious exception handlers of their own, which could otherwise lead to a compromise of the system.
  • SDL Uninitialized Variables: Ensures that the compiler warning level is set to level 4 (/W4), the strictest standard level. At this level, the compiler flags code where variables may be used without being initialized, which can lead to exploitable behavior.
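
To see what the first two policies protect against, consider the classic unchecked copy into a stack buffer. The before-and-after below is illustrative: strcpy_s is the secure CRT replacement that the C4996 deprecation warning steers you toward (the banned APIs policy makes that warning a build error, for example via /we4996), and compiling with /GS adds a runtime cookie as a backstop in case an overrun slips through anyway.

    #include <string.h>

    // Before: the source length is never checked against the 16-byte buffer.
    // With C4996 treated as an error, this call to strcpy fails the build.
    void CopyNameUnsafe(const char* source)
    {
        char name[16];
        strcpy(name, source);                   // potential stack buffer overrun
    }

    // After: strcpy_s takes the destination size and fails, rather than
    // overruns, when the source is too long. /GS places a cookie on the stack
    // so an overrun that does occur is detected at run time.
    void CopyNameSafe(const char* source)
    {
        char name[16];
        strcpy_s(name, sizeof(name), source);
    }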

It’s simple to enable any or all of the SDL check-in policies. From Team Explorer, right-click a Team Project and select the Source Control option from the context menu. Choose the Check-in Policy tab and add the SDL policies you want to enforce (see Figure 3). It’s important to note that check-in policy enforcement is performed on the client machine, not on the TFS server, so you’ll need to install the SDL check-in policies on each developer’s machine.

Figure 3 Adding Check-in Policies


Wrapping Up

For any secure development methodology to be effective, it has to be easy to automate and easy to manage. The MSF-Agile+SDL process template helps significantly on both counts. If you’re already using the MSF-Agile process template that ships with TFS, you already know how to use MSF-Agile+SDL: it’s a strict superset of the template you’re familiar with. Download it from microsoft.com/sdl and start creating more secure and more privacy-aware products today.


Bryan Sullivan is a security program manager for the Microsoft Security Development Lifecycle team, where he specializes in Web application and .NET security issues. He’s the author of “Ajax Security” (Addison-Wesley, 2007).

Thanks to the following technical expert for reviewing this article: Michael Howard