
PCI-DSS Compliance with Visual Studio 2010

Visual Studio 2010

Northwest Cadence

July 2011

The Payment Card Industry Data Security Standard (PCI-DSS) is a voluntary standard to which not only merchants that accept credit and debit cards but also the companies that process those transactions are increasingly held. This paper discusses how your team can use Visual Studio 2010 and Visual Studio Team Foundation Server 2010 to help meet certain aspects of this standard by following secure development practices, managing changes and test cases, and providing full traceability for your development efforts.

Visual Studio 2010

Team Foundation Server 2010

Brief Overview of the PCI-DSS

Introduction to PCI-DSS Compliance

Relationship between PCI-DSS and PA-DSS

Testing is the Focus of Certification

Using Visual Studio 2010 and Team Foundation Server 2010 to Achieve Compliance

Secure Software Development Practices

End-to-End Traceability

Change Control

Visual Studio 2010 Tools to Support PCI Compliance

Post-mortem compromise analysis has shown common security weaknesses that are addressed by PCI DSS, but were not in place in the organizations when the compromises occurred. PCI DSS was designed and includes detailed requirements for exactly this reason—to minimize the chance of compromise and the effects if a compromise does occur. -- PCI-DSS Self-Assessment Questionnaire

An increasing number of news articles demonstrate the need for security standards such as the PCI-DSS by highlighting security breaches and loss of credit-card information.

A security breach generally has far-reaching consequences for affected organizations. These consequences can include penalties that regulators impose, litigation, and increased regulatory scrutiny. However, the penalties that the marketplace inflicts are often the most severe: loss of customers, reputation, and even stock value. Because of such penalties, a single breach can cripple a company and force it to bankruptcy.

Analysis of past compromises shows that several common weaknesses form the root of most compromises. The PCI-DSS provides specific recommendations and requirements that help prevent these issues from occurring in any PCI-DSS certified organization and reduce the severity of loss if a breach occurs.

Visa, MasterCard, American Express, and other payment card issuers defined the PCI-DSS as a worldwide standard to prevent fraud and reduce the possibility of unplanned exposure of credit-card information. In general, this standard applies to all organizations that process information from any payment card that is branded by one of the issuers who were involved in creating the standard.

Organizations who want to prove that they comply with the standard must undergo a formal assessment. Depending on how many transactions the organization processes, this assessment is done either by the organization itself or by an external Qualified Security Assessor (QSA). To perform an internal assessment, an organization must complete a PCI-DSS Self-Assessment Questionnaire (SAQ). QSAs perform external assessments as formal audits, and, in many cases, use automated tools to probe the organization’s applications and networks for weaknesses. Even though compliance with the PCI-DSS is not enforced by a governmental body or even the PCI Security Standards Council, organizations that fail to comply can lose their ability to process credit-card payments or face financial penalties that the credit-card companies themselves can levy.

What this Whitepaper Covers

The PCI Security Standards Council offers several supporting documents to help companies enhance their payment-card security and prepare for an assessment. For more information, see

This paper will focus on the key areas of compliance that affect software development. The PCI-DSS covers substantially more areas, which include security for wireless and wired networks, hardware, databases, and file systems. (Organizations must meet requirements in 12 areas.) The software development requirements are broad and include third-party penetration testing and other actions that require full organization support. Thus, we will restrict our coverage to those areas that products in Visual Studio 2010 support, including the client tools in Visual Studio 2010 and the server support that Team Foundation Server 2010 provides. Specifically, we will look at secure development practices, end-to-end traceability, change management, and tools in Visual Studio 2010 that support compliance. In addition, we will include a brief overview of the Security Development Lifecycle process template for Team Foundation Server 2010. This template helps integrate security best practices into the software development process.

Complying with any set of guidelines or regulations can be difficult. Your team can dramatically reduce this difficulty by using automated systems and tools that enable a defined process, provide an end-to-end audit trail, and report on full traceability between components of the system. Team Foundation Server 2010 provides those tools.

The good news is that the PCI Security Standards Council does not specify a particular software development methodology. Instead, the council relies primarily on testing and auditability for compliance. This strategy invites us to consider the use of Agile or Lean methodologies, in addition to more traditional approaches, to deliver validated software.

Further good news is that the council provides a series of concrete, specified self-assessment tests to ensure that development teams know the standards to which they are being held. These tests are provided in the PCI-DSS Self-Assessment Questionnaires.

To prove PCI compliance, organizations must meet several standards, each of which has associated guidance documentation. For instance, hardware vendors have a different set of guidance and validation rules than software vendors do, and both types of vendors must meet different requirements from merchants and service providers. From a software development perspective, you should be aware of two main areas of guidance: PCI Data Security Standard (DSS) and PCI Payment Application Data Security Standard (PA-DSS).

Figure 1: Payment Card Industry Security Standards

Payment Card Industry Security Standards Graphic

Relationship between PCI-DSS and PA-DSS

“A PA-DSS compliant payment application alone is no guarantee of PCI DSS compliance.” -- PCI PA-DSS Requirements and Security Assessment Procedures v1.2.1

The PCI-DSS is the umbrella requirement that organizations that process credit-card transactions must meet. To simplify compliance, these organizations often use software or services that they license from other companies. To ensure that licensed software meets high standards, the providers of this software must meet the PA-DSS requirements. These requirements do not replace the need for a PCI-DSS assessment. Regardless of other assessments, merchants and service providers must comply with the more comprehensive standards in the PCI-DSS.

In “PCI DSS – Requirements and Security Assessment Procedures V2.0,” the PCI Security Standards Council offers the following explanation:

“The PA-DSS applies to software vendors and others who develop payment applications that store, process, or transmit cardholder data as part of authorization or settlement, where these payment applications are sold, distributed, or licensed to third parties. Please note the following regarding PA-DSS applicability:
- PA-DSS does apply to payment applications that are typically sold and installed “off the shelf” without much customization by software vendors.
- PA-DSS does not apply to payment applications developed by merchants and service providers if used only in-house (not sold, distributed, or licensed to a third party), since this in-house developed payment application would be covered as part of the merchant’s or service provider’s normal PCI DSS compliance.”

You should understand one more caveat. If a company builds an application and sells it to only one external organization (such as a custom solution), a separate PA-DSS is not required. In this sort of case, the application “would be covered as part of the merchant’s or service provider’s normal PCI DSS”.

Even companies that create only internal software and do not sell PCI-compliant software can benefit from understanding both standards. Thus, this whitepaper will examine practices for complying with both PCI-DSS and PCI PA-DSS.

Testing is the Focus of Certification

Compliance with the PCI-DSS revolves around testing. Each of the 12 requirements is broken down into sub-requirements, each of which is associated with one or more questions or test cases. This structure is especially apparent for organizations that are conducting a self-assessment. Unlike many other certifications, the PCI-DSS is driven by clear expectations and has clearly demarcated standards that organizations must meet for compliance. However, compliance is not simply a matter of checking off a set of boxes. The process can be difficult, but it is relatively clear.

To pass an internal or external audit, you must have end-to-end traceability throughout the entire development process, including tracking changes, validating bug fixes, and performing code reviews. All of this activity requires documentation, and you will probably need a lot of it. However, you shouldn’t confuse documentation with paperwork. Paper may be an important part of your certification plan, but it need not include the exhausting piles of busywork that many organizations create when they seek compliance. Instead, your team can prove compliance by combining relatively simple paperwork requirements with deep, thorough traceability throughout the development lifecycle. You can capture this traceability in Team Foundation Server 2010 and create reports in a variety of formats that include, if necessary, paper.

The PCI-DSS and PA-DSS requirements are very broad, covering hardware, network configuration, and personnel training, in addition to software development. However, Visual Studio 2010 and Team Foundation Server 2010 provide a very valuable foundation to track your organization's adherence to the PCI principles and to prove compliance with its requirements.

Visual Studio 2010 and Team Foundation Server 2010 were not designed specifically with PCI compliance in mind, but they were designed to support fundamental good practices, end-to-end traceability, and auditability. One of the fundamental requirements of any application lifecycle management (ALM) tool that supports regulatory compliance is full traceability and auditability, and Team Foundation Server was designed to help teams meet those needs.

To understand how Visual Studio 2010 and Team Foundation Server 2010 can help your team comply with PCI requirements, including PCI-DSS and PA-DSS, this paper will examine sections of the PCI-DSS Self-Assessment Questionnaires, the PA-DSS documentation, and relevant sections of other PCI documents.

The Self-Assessment Questionnaire documents contain hundreds of questions, so this white paper couldn't possibly discuss each of them in turn. Instead, it focuses on the few key concepts that the questions about software development explore.

Secure Software Development Practices

“Obtain and examine written software development processes to verify that they are based on industry standards, that security is included throughout the lifecycle, and that software applications are developed in accordance with PCI DSS.” -- PA-DSS 5.1

Software development practices provide the foundation on which software is built. Poor development practices often result in poor software. By following a solid set of practices and using appropriate tools to enable those practices, your team can build good software more easily and reliably.

Although organizations undergo a series of tests to prove compliance with PCI, the PCI Security Standards Council is clear that organizations must focus their software development process on security. Processes that focus on security build it into the application from the start and produce software that is far more secure. Organizations that try to drive secure code through testing rarely succeed. Teams that rely solely on testing to discover security flaws ship software that contains substantially more bugs. In addition, these teams waste a lot of time rewriting code to respond to security and quality issues that testers find late in the development cycle.

Process templates in Visual Studio 2010

Visual Studio 2010 provides several built-in tools that enable good practices in software development. Each product development effort in Team Foundation Server is based on a process template. These templates specify how teams create and track requirements, test cases, bugs, tasks, risks, change requests, and other artifacts. Templates also include several default reports, template documents, and documentation. Through tight integration with Visual Studio 2010, a process template can also enforce a wide range of behaviors, which include requiring formal sign-off during various workflows, tracking whether required tests have been run, and enforcing links between code changes and the requirements that motivated them. Finally, each process template provides a set of guidance that your team can configure to specify how it should conduct the development process.

These process templates are easy to customize, but many organizations that seek to prove PCI compliance find that the MSF-Agile + Secure Development Lifecycle template fulfills their needs with little or no customization.
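When customization is needed, process templates define each work item type as XML that an administrator can export, edit, and re-import. The following fragment is a minimal sketch of one such customization, a required security-review field on a Requirement type; the Contoso.SecurityReview field name is invented for illustration and is not part of any shipped template:

```xml
<witd:WITD application="Work item type editor" version="1.0"
           xmlns:witd="http://schemas.microsoft.com/VisualStudio/2008/workitemtracking/typedef">
  <WORKITEMTYPE name="Requirement">
    <FIELDS>
      <!-- Hypothetical field: records the outcome of a security review -->
      <FIELD name="Security Review" refname="Contoso.SecurityReview" type="String">
        <REQUIRED />
        <ALLOWEDVALUES>
          <LISTITEM value="Not Started" />
          <LISTITEM value="Passed" />
          <LISTITEM value="Failed" />
        </ALLOWEDVALUES>
      </FIELD>
    </FIELDS>
  </WORKITEMTYPE>
</witd:WITD>
```

Because the field is marked REQUIRED and limited to a fixed list of values, Team Foundation Server will not save a requirement without a recorded review outcome, which gives assessors a simple, queryable audit trail.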

MSF-Agile + Secure Development Lifecycle v5.0 Process Template

“Simply put, the Microsoft SDL is a collection of mandatory security activities, presented in the order they should occur … …practical experience at Microsoft has shown that security activities executed as part of a software development process lead to greater security gains than activities implemented piecemeal or in an ad-hoc fashion.” - Simplified Implementation of the SDL, Microsoft

The Security Development Lifecycle (SDL) is a security assurance process that is focused on software development. Since 2004, the SDL has been a key security initiative at Microsoft that seeks to reduce the number and severity of security vulnerabilities through both guidance and practical tools.

The Microsoft SDL is based on three core concepts—education, continuous process improvement, and accountability. Although the SDL is implemented in phases, organizations often use it iteratively to support Agile software development. The following figure identifies the various phases of the SDL and the requirements at each phase.

Figure 2: Secure Development Lifecycle phases

Secure Development Lifecycle phases graphic

The SDL focuses on security and privacy throughout all phases of the development process and, therefore, integrates well with Team Foundation Server 2010. Your team can use Team Foundation Server to enable the process by simplifying adoption and providing traceability into the lifecycle activities. To integrate with Team Foundation Server 2010, your team needs a process template.

You can download the MSF-Agile + Secure Development Lifecycle v5.0 Process Template (MSF-Agile + SDL) from Microsoft. Teams use this template to help implement the SDL.

Figure 3 - Features of the MSF-Agile + SDL Template

Features of the MSF-Agile + SDL Template

The MSF-Agile + SDL template is more than a simple process template. In addition to including more work items and workflows that relate to security, the template also includes a service that responds to common development activities. For instance, when a team creates an iteration, the service creates security work items that the team must complete during that iteration. The service also adds security-related tasks whenever a team adds Visual Studio projects to version control. The service determines whether the new project is based on C/C++ or .NET managed code and then creates tasks that relate to that type of development. For example, C/C++ projects trigger the creation of work items to ensure that buffer overflow compiler settings are set. The MSF-Agile + SDL template also includes five check-in policies that enforce good code practices. For example, these policies restrict check-in of code that contains known insecure API calls, check for buffer overrun vulnerabilities, and ensure the creation of secure exception handlers. The template also helps you begin using the SDL Threat Modeling Tool. Finally, additional components for SharePoint Products provide dashboards that highlight security status.

The MSF-Agile + SDL template includes two types of work items that relate to security: the SDL task and the SDL task archetype. SDL task archetypes define the SDL tasks that the SDL-Agile Controller Service creates at specific points in the software development process. The process template ships with 37 SDL task archetypes, but your team can easily extend the template by adding another archetype work item. Your team can use work items that are based on the SDL task to manage the security-related tasks that are critical to reliably developing secure software. Teams must complete SDL tasks at appropriate phases and cannot defer these tasks without requesting an exception. To request an exception, the team must complete several fields to justify the request, and the SDL task is moved to an Exception Requested state. When the team moves the task from the Exception Requested state to either Active or Closed, the change is tracked and auditable.
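Conceptually, this exception workflow can be expressed in the same work item type XML that defines any process template. The fragment below is an illustrative sketch, not the shipped template definition; the Contoso.ExceptionJustification field name is invented:

```xml
<WORKFLOW>
  <STATES>
    <STATE value="Active" />
    <STATE value="Exception Requested" />
    <STATE value="Closed" />
  </STATES>
  <TRANSITIONS>
    <!-- Deferring an SDL task requires an explicit, auditable exception request -->
    <TRANSITION from="Active" to="Exception Requested">
      <REASONS>
        <DEFAULTREASON value="Exception requested" />
      </REASONS>
      <FIELDS>
        <!-- Hypothetical justification field; the shipped template defines its own -->
        <FIELD refname="Contoso.ExceptionJustification">
          <REQUIRED />
        </FIELD>
      </FIELDS>
    </TRANSITION>
  </TRANSITIONS>
</WORKFLOW>
```

Because every state transition is recorded in the work item history, each exception request, approval, and closure is automatically part of the audit trail.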

In short, the MSF-Agile + SDL template provides a substantial amount of tooling around developing secure software. The template provides the base infrastructure to track the entire development process and includes built-in tasks and workflow to ensure that the practices are tracked. For more information about the Microsoft Secure Development Lifecycle, see

Additional Development Tools in Visual Studio 2010

In addition to process templates, Visual Studio 2010 provides many integrated tools that your team can use during the development effort. Out-of-the-box tools include architectural, development, and testing tools; automated build infrastructures; integrated version control; and reporting engines. Microsoft provides many other tools that you can download separately and integrate with Visual Studio. These tools include the SDL Threat Modeling Tool, Anti-Cross Site Scripting Library, banned.h, Minifuzz File Fuzzer, Attack Surface Analyzer, and App Verifier. To effectively use these tools to prove compliance, you should set them up before the development effort starts and train the development team to use them.

Put Key Practices in Place Early

You should implement a few key practices early:
- Set up the version control structure, with empty solutions and projects that are ready for development, so that each solution builds automatically.
- Have your testers create test lists, which will contain the automated tests, and add them to the automated build.
- Define branching patterns and identify code promotion paths.
- Set up alerts to notify people when deviations from the process occur, and identify reports to track overall quality, progress, and risk.
- Review process guidance to ensure that it matches the needed workflow.
Finally, your team should understand not only the objectives of the software development effort but also the tests that will be required to prove compliance.

End-to-End Traceability

PCI compliance strongly emphasizes good development practices. Although traceability is rarely mentioned directly, it is nearly impossible to show adherence to the appropriate secure practices without the ability to visualize and understand how the development activity has progressed. Thus, traceability is a very important tool for proving PCI compliance. This traceability must flow through the entire lifecycle.

The foundational element that supports end-to-end traceability is the requirement. It might be called a feature, a user story, or a deliverable, but it always defines what the team must deliver to the customer. In addition to the customer-facing requirements that define the direct business value of your software, teams that seek PCI compliance should also focus on additional software requirements that specify the expected security attributes of the application.

Because requirements often change as a team develops software, all changes to those requirements should be tracked. This diligence provides insight into potential security risks: requirements that change late in the process often risk introducing security flaws.

Traceability must then flow from requirements through to other artifacts, such as risks, test runs, and bugs. Traceability between requirements, test cases, and test runs is especially important for proving compliance with the PCI requirements for testing. Relationships between risks and requirements also highlight potential security areas that your team must manage. Not only are other forms of traceability encouraged, but they also, in our experience, directly and positively affect the assessment process.

Requirements Provide the Foundation

Team Foundation Server 2010 provides end-to-end visibility into the entire application lifecycle. Not only does it capture the traceability data effectively, but it also provides several visualizations and reports that allow the requirements and all associated artifacts to be conveniently audited.

Figure 4 – Work items have high-fidelity relationships to other work items

Work Item Relationships

Traceability between Requirements, Test Cases, and Bugs

By using Visual Studio 2010, you can implement complete traceability across the entire software development lifecycle. The traceability between requirements, test cases, test results, and bugs is particularly important for testing. By utilizing the built-in testing tools, developers and testers can effectively collaborate on linking test cases to requirements, finding and fixing bugs, and improving the quality of the code. These day-to-day activities result in data that your team can track in Team Foundation Server 2010 and display in both ad-hoc and built-in reports.

In the following figure, a report shows the quality of each individual requirement in terms of the number of active test cases, the most recent status of test runs, and any bugs that are currently logged against it. In addition, the report shows how much work remains to complete the coding, testing, and deployment of this requirement. This built-in report powerfully highlights the inherent traceability that Visual Studio 2010 offers.

Figure 5 - Overview reports highlight test run status and bugs

Overview reports highlight test run status and bug

Version Control is a Key Player

Team Foundation Server 2010 includes version control for the enterprise, which ensures that all changes to source code are tracked and auditable. Unlike some version-control systems, Team Foundation Server 2010 is built to ensure that developers cannot manipulate their check-ins after they have been committed and built. This security gives auditors confidence in the traceability. In addition, several advanced features ensure not only that traceability is auditable but also that it provides deep information. For example, version control in Team Foundation Server 2010 groups related code changes into “changesets,” and each changeset generally represents a measurable change to the system, such as a bug fix. These changesets are then linked to the work items for which the code was changed. In the following changeset, the change was checked in to resolve a specific bug. You can display the bug, its detail, and a direct link to the bug from the Work Items tab.

Figure 6: Code changes are tracked and auditable

Code changes are tracked and auditable

Automated Build System Ties Together Other Items into a Build

Team Foundation Server 2010 includes a comprehensive automated build system called Team Foundation Build. By using Team Foundation Build, your team can track all builds, including those that are approved for testing and release. Because version control is tightly integrated with work items, a substantial amount of traceability information is embedded in each build. For example, each build report identifies which code changes (and their associated work items) were compiled in the build. By using tools in Visual Studio 2010, your team can determine exactly what code changes the team made between any two builds and what work items the team worked on during that time. This capability provides a lot of information that is useful to anyone who is assessing the development process or investigating the differences between releases.

Figure 7: Build reports highlight associated code changes, work items, and manual and automated tests that the build affects

Build Reports show code changes & affected tests

End-to-End Traceability Can Be Visualized

Team Foundation Server 2010 collects data from many artifacts and can provide a comprehensive, end-to-end view into the development lifecycle. You can trace requirements down to the lines of code that the team added, edited, or deleted to implement them. Build reports detail which bugs the team fixed, what requirements the team worked on, what code the team changed, and which automated tests the team ran against any particular build. Test run reports highlight the quality of requirements, the number of executed tests and their status, and even which tests were affected by recent builds.

The following figure shows a simple example of how you can visualize the traceability from a single bug. A requirement, a change request, and a test case are directly related to the bug, and you can trace progress from the requirement to the code. One thing to note is that no code has been committed in response to this bug. As discussed earlier, your team can provide full traceability either through a direct view of work items or through a network visualization of relationships.

Figure 8: Links between work items show traceability all the way to code

Links between work items give full traceability

Change Control

“Follow change control processes and procedures for all changes to system components.” - PCI DSS, Section 6.4

Change control is closely related to traceability. Traceability provides the foundation that allows visibility into the entire process. However, change control is a key area that should be discussed separately. Even with full traceability, a process that does not support effective change management makes it difficult to sustain effective and secure software development practices.

Change control is often associated primarily with version control tools, possibly combined with a build system, and a way to ensure that teams can effectively track any changes to requirements. Team Foundation Server 2010 takes change control further. In addition to providing a full suite of tools that help teams audit the entire lifecycle, Team Foundation Server 2010 provides specific features that alert teams to the results of system changes.

Consider a development team that is nearing the first release of a software system that complies with PCI. Management approves a change to an existing requirement. Before the code is written, the team must assess the risks. Developers trace the existing requirement all the way down to the code files that they changed to implement the requirement. The process is then reversed to show all other requirements that touched the same code files. This strategy highlights the dependencies between the requirements and provides an initial take on the complexity of the change, the breadth of the impact, and the tests that may need to be rerun. In addition, the assessment may uncover dependencies between the requirement to be changed and key areas of the application, such as data encryption modules, that deal specifically with PCI requirements.

After this initial assessment, the team may decide to implement the change. As developers write code and commit it to version control, additional builds are created and released to testing. Test impact analysis, a feature of Visual Studio 2010, then alerts the test team to any tests, even manual ones, that must be rerun based on changes to the underlying code. Also, Team Foundation Server 2010 can identify which items were worked on between any two builds. This feature helps the team identify all work, not just the change request, that has been accomplished. As the development team integrates the changes into the final build, the test team can confirm that the changes were effectively tested, the change request has been completed, and the system is ready for release.

Work Item Changes are Auditable

Teams should not only create solid requirements but also track all changes to those requirements for audit purposes. Team Foundation Server 2010 helps teams track all changes to all work items, including requirements. At any time, you can easily understand the history of a requirement or other work item. In addition to viewing the complete change log on the History tab, you can perform an ‘as of’ query that will return the exact value of a field in a requirement at any time in the past.
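For example, a work item query written in WIQL can use the ASOF clause to report field values as they stood on a past date. The query below is an illustrative sketch using standard system field names; the date is arbitrary:

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.WorkItemType] = 'Requirement'
  AND [System.State] = 'Active'
ASOF '7/1/2011'
```

An auditor can run the same query with different ASOF dates to reconstruct how a set of requirements evolved over the course of the project.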

Figure 9: Work item history is built into Team Foundation Server 2010

Work Item History in Team Foundation Server

Work Items Track Relationships to Other Work Items

Because Visual Studio 2010 tracks relationships between work items, you can easily visualize and track the changes to a requirement or the impact of a bug. Figure 3 highlights the various links that work items can have to each other. Your team can create these links to easily identify and track all work items that affect a requirement, such as change requests, test cases, child tasks, or related bugs. Because you can trace each work item, you can audit the impact of change requests or bug fixes. In addition, you can run simple reports to reveal whether any change request was implemented without authorization, risk analysis, or comprehensive testing.
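These relationships are themselves queryable. As a sketch, a WIQL link query against the WorkItemLinks source can list every test case linked to a requirement through the Tested By link type; the exact work item type and link-type names depend on the process template your team uses:

```sql
SELECT [System.Id], [System.Title]
FROM WorkItemLinks
WHERE ([Source].[System.WorkItemType] = 'Requirement')
  AND ([System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Forward')
  AND ([Target].[System.WorkItemType] = 'Test Case')
MODE (MustContain)
```

A requirement that returns no rows has no linked test cases, which is exactly the kind of gap an assessor will look for.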

Figure 10: Work item relationships are easily discovered

Work Item relationships are easily discovered

Test Impact Analysis

“For each sampled change, verify that functionality testing is performed to verify that the change does not adversely impact the security of the system.”- PCI DSS,

Because Team Foundation Server 2010 provides deep traceability, your team can easily discover the test impact of code changes. Visual Studio 2010 tracks the code that is running during both manual and automated test runs and correlates this information with the version of the code that is being tested. When the team applies code changes to fix bugs, to implement change requests, or for other reasons, Visual Studio 2010 can analyze the changes and, based on the data from the previous test run, determine which test cases the changes affected. This information helps testers quickly identify the minimal set of regression tests that they must run to ensure that functionality remains intact after an upgrade.

Figure 11: Test impact analysis automatically identifies the minimal set of required regression tests

Test Impact Analysis recommends tests to run

Work Item Activity between Builds

In addition to providing test impact analysis, Visual Studio 2010 allows your team to report on the work item activity between any two builds. By using this information, auditors can determine what features the team added and what bugs the team fixed in each maintenance release. This transparency is especially useful for identifying changes that may affect the overall security profile of the application.

Figure 12: Development activity is automatically traced between any two builds or releases


Visual Studio 2010 Tools to Support PCI Compliance

Visual Studio 2010 introduces several tools that can help your team prove PCI compliance. Some of these tools, such as static code analysis, directly relate to specific requirements of the PCI-DSS. Others, such as test case management, do not map directly but are critical to supporting the compliance effort, either by improving traceability or by helping teams implement the ‘good programming practices’ that compliance requires.

The following sections present the tools roughly in the order in which they are used in a traditional lifecycle of software development. However, their use is not limited to that sequence. In fact, in any iterative development process, teams will repeatedly work with each of these tools throughout the entire lifecycle.

Software Design

The design of any system is one of the most difficult tasks for any software development effort. The design task becomes even more demanding when a system must be PCI-compliant and will store financial information that is highly confidential.

One reason for a detailed design specification is to constrain the programmer within the intent of the requirements and design. This constraint reduces the need for ad-hoc design decisions. As the design evolves during the development process, these changes must be brought to the developer’s attention. In addition, any changes brought about by coding must be revalidated against the design. Most security flaws occur in the interactions between components of a system.

Support for the Unified Modeling Language

Your team can create UML diagrams directly in Visual Studio 2010, which provides built-in support for class, sequence, use case, activity, and component diagrams. These diagrams, along with other architectural artifacts, support the creation of a complete design specification. More importantly for validation, however, your team can link these artifacts directly to the requirements that they support. Changes to requirements can then also point to possible changes to the architecture.

Figure 13: Your team can easily associate UML diagrams with work items


Layer diagrams

A good architecture helps point to possible threats and helps the team create an effective threat model. The team can then use this threat model to review attack surfaces, identify potential security risks, and outline any risk mitigations.

The problem is that threat models depend on the construction of the code aligning with the architectural design of the system. If developers unintentionally violate the design by creating interactions between components that should be isolated, threat models lose some validity. Thus, it is absolutely critical that teams regularly validate existing code against the architecture. In the past, this validation has been tremendously difficult to perform at any level of detail. In fact, few assessments could go into the level of detail that would provide solid evidence from code to architectural entity. Your team can avoid this common pitfall by using the layer diagram in Visual Studio 2010.

The layer diagram shows the high-level architecture of a system and ensures that the code, as it evolves, stays consistent with the design. This diagram organizes code artifacts into logical groups, which are called layers, and describes not only the major components of a system but also the interactions between them. These interactions, which are generally dependencies, are represented by arrows that connect any two layers. You can enforce architectural constraints on the code by creating a layer diagram, linking code entities to layers, and specifying the interactions of the layers. These constraints can be validated on demand, when code is checked in, or even during the nightly build.

Layer diagrams help make code easier to understand, update, reuse, and maintain, and they ensure that the team does not violate architectural designs as time passes and the code base changes. This clarity is critical for proving that system changes did not reach across architected boundaries and cause unintended consequences.

To simplify the validation process, your team can validate the layer diagram for a solution during the nightly build. This approach quickly identifies any code changes that violate the architecture and helps ensure that the team takes appropriate action.
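The validation that a layer diagram performs can be illustrated with a minimal sketch. The layer names, code entities, and allowed dependencies below are hypothetical; the real feature operates on the compiled solution:

```python
# Sketch of layer-diagram validation: every cross-layer code dependency
# must match an allowed layer-to-layer arrow in the design.

allowed = {("UI", "Business"), ("Business", "Data")}
layer_of = {"LoginPage": "UI", "OrderService": "Business", "OrderRepository": "Data"}

def violations(code_dependencies):
    """Return code dependencies that cross layers without an allowed arrow."""
    bad = []
    for src, dst in code_dependencies:
        src_layer, dst_layer = layer_of[src], layer_of[dst]
        if src_layer != dst_layer and (src_layer, dst_layer) not in allowed:
            bad.append((src, dst))
    return bad

deps = [("LoginPage", "OrderService"),      # UI -> Business: allowed
        ("LoginPage", "OrderRepository")]   # UI -> Data: violates the design
print(violations(deps))
```

Running this check as part of the nightly build is exactly the pattern described above: an architectural violation fails the build before it can undermine the threat model.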

Figure 14: Layer diagrams not only display the architecture of the system but also enforce it


Architecture Explorer

When requirements change, architectures evolve, or changes are made that may affect the system, it is critical to understand the impact on the overall system. Historically, this task has been difficult. Visual Studio 2010 introduces Architecture Explorer, a tool that developers, testers, architects, and others can use to quickly explore the existing architecture of an application. By using Architecture Explorer, users can quickly identify dependencies between any two pieces of code, identify circular references that may cause instability, visualize all incoming calls to a class or method, and perform many other tasks. By providing this level of detail and the ability to drill from high-level architectural understanding to individual pieces of code, Architecture Explorer enables much of the traceability from design to code that is so important for compliance.
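One of the checks mentioned above, finding circular references, amounts to standard cycle detection over a dependency graph. The component names in this sketch are hypothetical:

```python
# Sketch: detecting a circular reference in a dependency graph, the kind
# of instability that Architecture Explorer can surface.

def has_cycle(graph):
    """Depth-first search with a recursion stack to detect any cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, []):
            if color[dep] == GRAY:          # back edge: cycle found
                return True
            if color[dep] == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

acyclic = {"UI": ["Logic"], "Logic": ["Data"], "Data": []}
cyclic = {"UI": ["Logic"], "Logic": ["Data"], "Data": ["UI"]}
print(has_cycle(acyclic), has_cycle(cyclic))  # False True
```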

Figure 15: Team members can use Architecture Explorer to rapidly explore dependencies in the code


Construction or Coding

Much of the coding effort falls outside the scope of validation. For instance, the PCI-DSS has very little to say about the computer language, the unit-test framework, or the framework libraries that teams use. Instead, the standard focuses on a few measures that are closely correlated with code quality, such as standard coding conventions, code complexity, and adherence to the design specification.

Another development task that teams typically undertake during coding is unit testing. Neither the PCI-DSS nor the PA-DSS specifies that automated unit tests should be used, but modern development practices generally require unit testing. Unit testing is especially important for compliance because it supports “code coverage” calculations, which determine how much code the automated unit tests actually exercise.

Code Metrics

Visual Studio 2010 includes several standard code metrics that your team can use to understand the complexity and maintainability of the underlying code. By using these metrics, developers, testers, architects, and even auditors can understand which parts of the code should be refactored or more thoroughly tested. Team members can also use this information to help identify areas of highest risk, because complexity is often inversely correlated with quality and maintainability.

Visual Studio automatically calculates the following measures:

  • Maintainability Index – an aggregate measure that highlights the overall maintainability of the code

  • Cyclomatic Complexity – a measure of the structural complexity of the code, based on the number of distinct paths through it

  • Depth of Inheritance – the length of the inheritance chain from a class to the root of the class hierarchy

  • Class Coupling – the number of dependencies between a class and all other classes in the system

  • Lines of Code – the number of non-comment lines of code in a class or method

Figure 16: Code metrics highlight potential areas of concern

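To illustrate one of these metrics, the following toy sketch approximates cyclomatic complexity by counting decision keywords in source text. Real tools, including Visual Studio, compute the metric from the compiled code, so treat this only as an illustration of the idea:

```python
# Toy sketch of cyclomatic complexity: one plus the number of decision
# points. Keyword counting is a crude approximation of path counting.

import re

DECISION_KEYWORDS = r"\b(if|elif|for|while|case|catch|and|or)\b"

def cyclomatic_complexity(source):
    """Approximate complexity = decision points + 1 (the single entry path)."""
    return 1 + len(re.findall(DECISION_KEYWORDS, source))

snippet = """
if amount > limit and not approved:
    reject()
elif amount > 0:
    for item in items:
        charge(item)
"""
print(cyclomatic_complexity(snippet))  # 4 decision points + 1 = 5
```

A straight-line method scores 1; every added branch or loop raises the score, which is why complexity correlates with testing effort and defect risk.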

Static Code Analysis

The feature for static code analysis in Visual Studio 2010 checks code against several hundred rules for potential errors in areas such as design, naming, reliability, and security. Your team can combine these rules into sets and run only a specific subset of the rules to highlight potential problems. For example, your team can run the “Minimum Recommended Rules,” which focus on the most critical problems such as potential security holes, application crashes, and other important logic and design errors. For maximum coverage, your team can run “All Rules,” which contains every available rule. The Microsoft Security Rules set is of particular interest for PCI compliance. Finally, your team can easily configure a custom set of rules to focus on your specific needs, if none of the built-in sets fit your situation.
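The following toy sketch illustrates what a single security-focused rule does: it flags source lines that appear to hard-code a secret. The pattern and rule are illustrative only and are far simpler than the rules that ship with Visual Studio:

```python
# Toy static-analysis rule: flag apparent hard-coded secrets in source.
# Real analyzers inspect compiled code and data flow, not just text.

import re

SECRET_PATTERN = re.compile(r'(password|pwd|secret|apikey)\s*=\s*["\']',
                            re.IGNORECASE)

def scan(source):
    """Return (line_number, line) pairs that violate the rule."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if SECRET_PATTERN.search(line)]

code = 'host = "db01"\npassword = "hunter2"\ntimeout = 30'
print(scan(code))  # flags line 2 only
```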

Figure 17: Preconfigured sets make rules easy to use


Figure 18: Your team can easily customize sets of rules for code analysis


Automated Unit Testing

Visual Studio 2010 provides a built-in framework that can help your team quickly create and run automated unit tests. Visual Studio also provides an advanced code coverage tool that not only provides numeric insights into how much code the automated unit tests cover but also graphically highlights code that none of the unit tests touched. By using this tool, a developer can quickly identify any untested code and create a unit test that will effectively test its functionality. This tool is especially useful for discovering error-handling code that may not run during typical operation.
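The core idea of code coverage, recording which lines a test actually executes, can be sketched with Python's tracing hook. The `charge` function and its happy-path test are hypothetical; Visual Studio instruments compiled binaries instead:

```python
# Sketch of what a coverage tool records: the lines a test executes.
# The untested error-handling branch is exactly what coverage reveals.

import sys

def charge(amount):
    if amount <= 0:                # error-handling branch
        raise ValueError("bad amount")
    return amount * 100            # convert to cents

def trace_lines(func, *args):
    """Run func and return the set of its line numbers that executed."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# A happy-path test never reaches the error-handling line.
covered = trace_lines(charge, 5)
all_lines = set(range(charge.__code__.co_firstlineno + 1,
                      charge.__code__.co_firstlineno + 4))
print(sorted(all_lines - covered))  # the uncovered raise line
```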

Figure 19: Code-coverage metrics are available for all unit-test runs



Testing

Testing is critical to PCI compliance. Tests must be documented, complete, and traceable to requirements, and teams must maintain a history of test case results. Both the PCI-DSS and PA-DSS rely on execution of tests to prove compliance. Of particular importance is the ability to create and audit test plans, test cases, and test results in multiple environments with full traceability between software requirements and the test runs that validate their behavior.

Teams must conduct tests in different environments. It is expected that software will be tested throughout the development process, not just near the end. This testing will occur predominantly in the development and test areas while the software is in development. Before release, however, the software is expected to be tested in a representative user site with end users of the application. This phase is critical because users often have different expectations from the technical development staff for how a system should behave. Thus, effective testing must involve representative users of the application.

Testing has one primary goal – to ensure that the application meets all specified requirements. Testing auditability means proving not only that some tests were run but also that the correct tests, linked to each requirement, were run. Teams must also know what the test results were, whether the tests effectively “covered” the requirements, and whether the tests are complete. No tool can ensure that a method, component, requirement, or a full application has been fully tested. However, tools can provide visibility into the test cases that verify a requirement. Your team can also use tools to help track all bugs that the team logged against the requirement, the code that the team implemented to resolve those bugs, and the risks that the team identified and associated with any untested functionality. Thus, a good tool can provide the information that you need to make important decisions around how much testing is enough.
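The question "do the tests cover the requirements?" reduces to a traceability map like the following sketch. The requirement and test case IDs are hypothetical; Team Foundation Server stores these relationships as work item links:

```python
# Sketch: classifying requirements by the state of their linked tests.
# Hypothetical IDs; in TFS this comes from requirement-to-test-case links.

requirement_tests = {
    "REQ-1 Encrypt stored PANs": ["TC-10", "TC-11"],
    "REQ-2 Mask PAN on display": ["TC-12"],
    "REQ-3 Rotate encryption keys": [],
}
test_results = {"TC-10": "Passed", "TC-11": "Failed", "TC-12": "Passed"}

def coverage_report(req_map, results):
    """Classify each requirement as untested, failing, or verified."""
    report = {}
    for req, tests in req_map.items():
        if not tests:
            report[req] = "untested"
        elif any(results[t] == "Failed" for t in tests):
            report[req] = "failing"
        else:
            report[req] = "verified"
    return report

for req, status in coverage_report(requirement_tests, test_results).items():
    print(req, "->", status)
```

An "untested" or "failing" status is precisely the information a team needs when deciding whether testing is sufficient to ship.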

Test Case Management

Visual Studio 2010 provides many tools that help teams create and maintain effective manual and automated test cases. Test plans in Visual Studio 2010 are auditable and contain test cases that are grouped by area, linked directly to a requirement, or both. This organization allows complete traceability through all of the testing artifacts, including every test run. In many cases, this automated traceability replaces hundreds of pages of documentation and provides a more reliable audit trail.

Test impact analysis is one of the most powerful features of test cases in Visual Studio 2010. This feature directly ties information about test run coverage to source code. Whenever the underlying source code changes, testers are alerted to which manual and automated tests they must run again to keep the test runs current and valid. For more information, see Test Impact Analysis earlier in this white paper.

Automatic Tracking of Manual Tests

Through a newly introduced application called the Manual Test Runner (MTR), Visual Studio 2010 ensures that your team can collect data automatically even when they perform manual tests. The MTR tracks data that is involved in manual test runs, including action recordings. Teams can use those recordings to replay manual tests in an automated fashion, which simplifies repeated test passes. In addition, manual test runs gather data about the system that is being tested, up to and including the code paths that the tests exercise. All of this data is maintained in Team Foundation Server 2010 and provides a solid foundation for compliance.

One notable feature of the MTR is the ability to track end-user testing. Good practices for software development suggest that teams should retain documented evidence of all testing procedures, test input data, and test results. By using the MTR, a test team that is deployed to an end-user site can ensure not only that the appropriate data is tracked but also that, if the team discovers any bugs, sufficient data is sent back to developers for remediation.

With the introduction of the MTR and the other testing tools in Visual Studio 2010, Microsoft has provided a solid testing foundation that can help your team meet even the most demanding requirements for auditability and traceability.

Actionable Bugs

Bugs that are created with Visual Studio 2010 provide rich information that both developers and auditors can use. For example, a bug that is created during a manual test run automatically has extensive information attached. This information includes a video of the test run, information about the systems that were being tested (memory utilization, screen resolution, and so forth), event log data that was collected from targeted machines, and even a log that developers can use to quickly step through the historical execution of the code.

Figure 20: Bugs capture detailed data automatically to support traceability and quick, accurate bug fixes


Automated Test Cases

In addition to supporting full traceability from requirement to test to bug to code, Visual Studio 2010 can help your team automate test cases. Your team can run such test cases every night during the nightly build process to minimize regressions and to keep the test result data valid against the current code. This capability assists compliance by ensuring that any code changes that fail tests are caught quickly, when remediation can be accomplished at minimum cost. It also maintains a record of successful test runs and highlights the stability of the code that is changing.

Load and Stress Testing

Visual Studio 2010 also provides a powerful load-test capability that your team can use to test the application under both realistic and unexpected loads. Your team can use the load-test tool to gather data about any failures or inconsistencies that appear when the application is put under load. Your team can also use this tool to determine realistic capacity limits for an application, incorporate those limits into the runtime management of the system, and treat any load that exceeds them as an early warning of possible failure.
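A minimal sketch of the load-testing idea, driving an operation concurrently and measuring latency so that capacity limits can be derived, might look like this. The `handle_request` stub is a hypothetical stand-in for the real operation under test:

```python
# Sketch of a load test: many simulated clients drive an operation
# concurrently while per-request latency is recorded.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the operation under test."""
    time.sleep(0.01)
    return "ok"

def load_test(clients, requests_per_client):
    latencies = []
    def one_client():
        for _ in range(requests_per_client):
            start = time.perf_counter()
            handle_request()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for _ in range(clients):
            pool.submit(one_client)
    return latencies

lat = load_test(clients=5, requests_per_client=4)
print(len(lat), max(lat) < 1.0)  # 20 requests, all well under a second
```

From recorded latencies like these, a team can derive the capacity limits described above and alert when production load approaches them.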

Maintenance and Support

The initial software development process often gets the most attention from business and software development teams alike. However, the maintenance and support phase of an application's lifecycle can be the most critical. Most compliance-related issues are caused by software defects that were introduced when changes were made to the software after its initial production and distribution.

In addition, your team should do more than document which bugs were fixed in each release. Your team should also assess proposed modifications, enhancements, or additions to determine the effect that each change would have on the system. Establishing end-to-end traceability between requirements, change requests, and reported bugs is absolutely critical, as is tracking the impact of the change on the entire system, especially on testing. In fact, teams should analyze the impact of change requests and defects to determine the extent to which validation tasks must be re-run. Thus, regulatory compliance suggests an understanding not only of the system structure and architecture but also of how requirements, change requests, and defects affect that system and each other. Traditionally, this understanding has been extraordinarily difficult to achieve reliably. Understanding how requirements, change requests, and defects interact is hard enough. Understanding the relationship between code changes and affected tests has been nearly impossible.

The tools that were described earlier in this white paper, such as full traceability, test impact analysis, and test case management, help teams give maintenance and support the same careful scrutiny as the initial development effort.

Conclusion

Compliance with any regulation requires auditability of the development lifecycle – traceability that identifies the “who, what, when, where, and why” for each change to the system. That process may sound easy, but traceability is generally difficult and labor intensive to create. In many cases, this problem leads to the heavyweight, waterfall practices that we all want to avoid. However, without good tools, most teams must rely on manual documentation and substantial overhead.

Visual Studio 2010 automates the capture of much of the required information and the links between artifacts. You can then expose all of this data in reports, queries, and forms that highlight the relationships, through time, of the important artifacts that are generated throughout the development lifecycle. Visual Studio 2010 can generate most of this information from the typical day-to-day activity of each team member. Instead of requiring more documentation, Visual Studio 2010 tracks typical activities and correlates them in such a way as to provide deep traceability. For instance, a developer check-in is associated with a work item. The effort is no different than in other systems, but the benefits are staggering. Your team can automatically identify which code was changed in a new release, which bugs were fixed in a build, what tests were affected by change requests, and so much more – all from that one simple action. That capability shows the power of Visual Studio 2010.

Team Foundation Server provides a comprehensive and flexible set of capabilities through traceability, test case management, and automated builds. The rich extensibility of the built-in features gives your team the tools that it needs to prove PCI compliance. If your software development team wants to prove PCI compliance, Team Foundation Server 2010 is an excellent tool on which to base your secure development practices.

Northwest Cadence is a national leader in Microsoft ALM and software lifecycle solutions. Recognized by Microsoft as a Gold ALM Partner, Northwest Cadence focuses exclusively on application lifecycle management with clients across the globe. With experience providing consulting services on Microsoft ALM tools that date back to product inception (Visual Studio 2005 Team System), Northwest Cadence has actively worked with clients and Microsoft on product development, implementation, and process incorporation. Northwest Cadence has a solid commitment to honesty and excellence. This commitment, coupled with vast experience, means that Northwest Cadence clients know to expect an exceptional experience every time.