Phase 4: Verification for LOB
After successful completion of the previous phase, the internal review portion of the Implementation phase, application security subject-matter experts are engaged. This phase verifies that an application being deployed into production environments has been developed in a way that adheres to internal security policies and follows industry best practices and internal guidance. A second objective is to identify any residual risks not mitigated by application teams. The assessments conducted during the Verification phase are typically performed by a security or privacy subject-matter expert.
An ideal comprehensive assessment includes a mix of both white and black box testing. There is a tendency to prefer black box testing because “it's what the hackers do.” However, it is also more time consuming and can have mixed results. In addition, it is difficult for individuals who are only “part-time” penetration testers to develop the skills and expertise needed to efficiently perform a black box test. Identifying multiple instances of a class of vulnerability is more easily accomplished in a code review (white box). A code review, though, can make finding business logic vulnerabilities very difficult. Reading the source code for a complex AJAX-based ASP.NET form and actually exercising it can yield vastly different results in terms of issues found.
Further, this phase should be conducted with a mix of manual process and automated tools. Manual reviews may need to be time constrained and focus on high-risk features. Automated tools can reduce overhead, but should not be relied upon exclusively.
The service level assigned to the application at the Risk Assessment phase governs the type of assessment an application receives in this phase. An application that has been assigned a medium or higher rating automatically requires a white-box code review, while applications assigned a low rating will not.
- Code review (white box)
- Security team is provided access to an application’s source code and documentation to aid them in their assessment activities.
- Complete review using both manual code inspection and security tools, such as static analysis or penetration testing.
- Review threat model. Code reviews are prioritized based on risk ratings identified through threat modeling activities. Components of an application with the highest severity ratings get the highest priority with respect to assigning code review resources, whereas components with low severity ratings are assigned lesser priority.
- Validate tool results. The security expert also validates results from code analysis tools (if applicable), such as CAT.NET, to verify that vulnerabilities have been addressed by the development team. In situations where this is not the case, the issue is filed in the bug tracking system.
- If source code is not available or the application is a third-party application, then black box assessment is conducted for that application.
- Code review duration. The duration of a security review is determined by the security SME and is directly related to the amount of code that needs to be reviewed.
- Code review can be conducted manually or by using automated tools to identify categories of vulnerabilities in the code. However, it should be noted that automated tools should supplement a code review and not replace it entirely, due to their limitations.
- SQL injection. Ensure that the SQL queries are parameterized (preferably within a stored procedure) and that any input used in a SQL query is validated.
- Cross-site scripting. Ensure that user-controlled data is encoded properly before rendering to the browser. .NET applications can leverage the Anti-XSS library, whose encoding is more rigorous than the native .NET encoding.
- Cross-site request forgery. Ensure that the Page.ViewStateUserKey property is set to a unique value that prevents one-click attacks on your application from malicious users.
- Data access. Look for improper storage of database connection strings and proper use of authentication to the database.
- Input/data validation. Look for client-side validation that is not backed by server-side validation, poor validation techniques, and reliance on file names or other insecure mechanisms to make security decisions.
- Authentication. Look for weak passwords, clear-text credentials, overly long sessions, and other common authentication problems.
- Authorization. Look for failure to limit database access, inadequate separation of privileges, and other common authorization problems.
- Sensitive data. Look for mismanagement of sensitive data by disclosing secrets in error messages, code, memory, files, or the network.
- Auditing and logging. Ensure the application is generating logs for sensitive actions and has a process in place for auditing logs file periodically.
- Unsafe code. Pay particularly close attention to any code compiled with the /unsafe switch. This code does not have all of the protection that normal managed code has. Look for potential buffer overflows, array out-of-bounds errors, integer underflow and overflow, and data truncation errors.
- Unmanaged code. In addition to the checks performed for unsafe code, also scan unmanaged code for the use of potentially dangerous APIs, such as strcpy and strcat. For a list of potentially dangerous APIs, see the section “Potentially Dangerous Unmanaged APIs,” in Security Question List: Managed Code (.NET Framework 2.0). Be sure to review any interop calls and the unmanaged code itself to make sure that bad assumptions are not made as execution control passes from managed to unmanaged code.
- Hard-coded secrets. Look for hard-coded secrets in code by looking for variable names, such as "key," "password," "pwd," "secret," "hash," and "salt."
- Poor error handling. Look for functions with missing error handlers or empty catch blocks.
- Web.config. Examine your configuration management settings in the web.config file to make sure that forms authentication tickets are protected adequately, tracking and debugging is turned off, and that the correct algorithms are specified in the machineKey element.
- Code access security. Search for the use of asserts, link demands, and AllowPartiallyTrustedCallersAttribute (APTCA).
- Code that uses cryptography. Check for failure to clear secrets and improper use of the cryptography APIs themselves.
- Threading problems. Check for race conditions and deadlocks, especially in static methods and constructors.
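The SQL injection check above targets .NET, but parameterization is language-agnostic. A minimal sketch in Python using the standard sqlite3 module (the table and data are illustrative) shows why bound parameters defeat injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # Parameterized: the driver binds `name` as data, never as SQL text,
    # so input such as "' OR '1'='1" cannot alter the query structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('admin',)]
print(find_user("' OR '1'='1"))  # [] -- injection attempt returns nothing
```

The same discipline applies to any data-access layer: the query text is fixed at authoring time and user input only ever travels through the parameter channel.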
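The cross-site scripting check comes down to encoding user-controlled data for the context in which it is rendered. A minimal sketch of HTML output encoding, using Python's standard html module in place of the .NET Anti-XSS library:

```python
import html

user_input = '<script>alert("xss")</script>'

# Encode user-controlled data before rendering it into HTML so the
# browser treats it as text, not markup. quote=True also encodes
# quote characters, which matters inside attribute values.
safe = html.escape(user_input, quote=True)
print(safe)
```

A reviewer looks for every sink where user data reaches the page and confirms an encoding call like this sits between the data and the output.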
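The hard-coded secrets check can be partially automated with a grep-style pass over the source tree using the variable names listed above. A rough sketch, where the regex and sample input are illustrative only (real scanners use far more patterns and entropy checks):

```python
import re

# Variable-name fragments from the checklist that often signal secrets.
SECRET_NAMES = ("key", "password", "pwd", "secret", "hash", "salt")

# Match an identifier containing one of the fragments, assigned a
# quoted literal, e.g.  api_key = "abc123"
PATTERN = re.compile(
    r'\w*(?:' + '|'.join(SECRET_NAMES) + r')\w*\s*=\s*["\'][^"\']+["\']',
    re.IGNORECASE,
)

def scan(source: str):
    """Return (line number, line) pairs that look like hard-coded secrets."""
    return [(i, line.strip())
            for i, line in enumerate(source.splitlines(), 1)
            if PATTERN.search(line)]

sample = 'user = "bob"\npassword = "hunter2"\napi_key = "abc123"\n'
for lineno, line in scan(sample):
    print(lineno, line)
```

Hits from a pass like this are candidates for manual review, not confirmed findings; the point is to focus reviewer time.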
- Penetration test (black box)
- This is the inverse of a white-box code review: the assessment is carried out without access to the application’s source code. This testing is intended to simulate an attacker’s perspective and uses a combination of tools and penetration techniques to find vulnerabilities in the system.
- While this best simulates most malicious hacker scenarios, this approach typically yields the fewest bugs, in terms of both quality and quantity, but it is the best approach when source code is not available for review.
- Depending upon available resources, this testing can be done internally by your security team or by engaging a third-party security firm as appropriate. Third-party security tools can also help with this requirement. Following are some of the high-level areas to consider in web penetration testing:
- Use HTTP(S) interrogators, such as Fiddler, to capture traffic and to investigate cookies, headers, and hidden fields. Use request/response tampering methods to detect error disclosure, cross-site scripting, SQL injection, and other injection attacks. All user-controlled data, such as cookies, headers, form fields, and query strings should be tested by sending in malformed data.
- Check for forceful browsing to verify authorization controls in applications where there are more than two user groups with different access levels.
- Use Network Monitor to identify if sensitive data is being transferred from client to server and to verify if the channel is encrypted or not. This would be more useful in the case of thick client LOB applications.
- Experiment with the high risk portions of the application to ensure that controls described in the code review discussion have been implemented correctly and consistently.
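The tampering step above amounts to systematically replacing each user-controlled field with malformed data and observing the response. A minimal sketch of generating those tampered requests (the payload strings and field names are illustrative; a real proxy like Fiddler would replay these against the live application):

```python
from urllib.parse import urlencode

# Canonical probe strings for common injection classes (not exhaustive):
# XSS, SQL injection, path traversal, and CRLF/header splitting.
PAYLOADS = {
    "xss": '<script>alert(1)</script>',
    "sqli": "' OR '1'='1' --",
    "traversal": "../../etc/passwd",
    "crlf": "value\r\nSet-Cookie: pwned=1",
}

def tampered_queries(base_params: dict):
    """Yield one query string per (field, payload) pair, with that single
    field replaced by a malformed value and all other fields left intact."""
    for field in base_params:
        for attack, payload in PAYLOADS.items():
            params = dict(base_params, **{field: payload})
            yield field, attack, urlencode(params)

for field, attack, qs in tampered_queries({"user": "alice", "page": "1"}):
    print(field, attack, qs)
```

Varying one field at a time makes it clear which input triggered any error disclosure or reflected payload in the response.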
- Deployment review of servers
- Review the deployment of the production servers to ensure adequate hardening. This review focuses on minimizing the attack surface of the server (in terms of running services and applications installed), hardening the operating system (ACLs, accounts, patching, registry hardening, minimal open ports, installing server functionality, such as IIS web-sites on a non-system drive), and hardening functionality, such as IIS and SQL Server.
- Review, if possible, actual production servers or the standard images used to build them. Failing that, review test servers, with the expectation that the issues found will be used as a road map by operations to harden the actual production servers.
- The test server environment and the production server should have similar security measures in place while the team is developing the application. This ensures that the application does not need to be modified to run on the production server. The security team runs security checks on the server, either manually or by using a tool.
- Privacy review
- Review the privacy statements, notification, privacy controls, user categories, data management, and PII management for the application.
- Host security deployment review
- Installing server software (IIS, SQL Server) introduces new attack surfaces that must be hardened as well. Each server deployed, whether Intranet- or extranet-facing, needs to be hardened to both reduce attack surfaces and provide defense in depth. Recommended tool: Attack Surface Analyzer
- Assessment results yielding Critical or Important bugs automatically result in the application being blocked from deploying into production environments until the issues have been addressed or an exception has been granted by the business owner accepting the risk.
- The security bug bar for LOB applications has additional considerations beyond what is described earlier in this document. Your business needs to establish guidelines for evaluating the risk posed by individual vulnerabilities. This includes a risk rating framework that applies across all applications. The risk rating framework is independent of the risk assigned to the entire application. The sample table below presents a bug bar that accounts for the unique environment of an LOB application, including the risk posed by individual bugs.
| Severity | Description |
| --- | --- |
| Critical | Impact across the enterprise and not just the local LOB application/resources. Exploitable vulnerability in a deployed production application. |
| Important | Exploitable security issue. Policy or standards violation. Affects local application or resources only. Risk rating = High risk. |
| Moderate | Difficult to exploit. Non-exploitable due to other mitigation. Risk rating = Medium risk. |
| Low | Bad practice. Should not lead to an exploit but may be helpful to an attacker exploiting another vulnerability. Risk rating = Minimal risk. |
- There is a trade-off between proving that a vulnerability is actually exploitable and the time constraints of finding bugs. It may not be worthwhile to craft an explicit exploit or malicious payload. In this case, you can adjust the severity as appropriate, erring on the side of caution.
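The sample bug bar can be read as a simple precedence of checks. A sketch of that precedence as a rating helper, where the flag names and their ordering are assumptions made for illustration, and the Important/Moderate labels follow the standard SDL severity vocabulary:

```python
def rate_bug(enterprise_impact=False, exploitable_in_production=False,
             exploitable=False, policy_violation=False,
             mitigated_by_other_controls=False, bad_practice=False) -> str:
    """Map a finding's attributes onto the sample LOB bug bar."""
    if enterprise_impact or exploitable_in_production:
        return "Critical"    # blocks deployment until fixed or excepted
    if exploitable or policy_violation:
        return "Important"   # affects local app/resources; risk = high
    if mitigated_by_other_controls:
        return "Moderate"    # difficult or non-exploitable; risk = medium
    if bad_practice:
        return "Low"         # aids another exploit at most; risk = minimal
    return "Low"

print(rate_bug(enterprise_impact=True))           # Critical
print(rate_bug(policy_violation=True))            # Important
print(rate_bug(mitigated_by_other_controls=True)) # Moderate
```

Encoding the bar this way keeps severity assignment consistent across reviewers, which matters because Critical and Important findings trigger the deployment block described above.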
Identified risks are logged in the bug-tracking system and assigned a severity rating. The output of this phase results in the following:
- Bug reports
- Exception requests for the risk posed by issues that cannot or will not be fixed prior to production
Handling risk. Development teams may file to be exempt from mitigating identified risks; however, it is important to note that an approved exception request does not indefinitely relieve development teams of the responsibility to mitigate those risks. Rather, an approved exception request grants development teams a time extension during which risks can exist unmitigated in production environments.
In response to exception requests, security teams gather all pertinent data points, such as technical details, business impact description, interim mitigations, and other exception information, and provide a development team’s upper management with these details in the form of an exception form. Upper management can then approve the exception request and accept the identified risks for a limited period of time, or reject the request and require the business group to mitigate the identified risks. It is important that a specific business owner explicitly assume the risk posed by unmitigated Critical and Important bugs.
The security team tracks all approved exceptions and follows up with the application team after the exception period has expired.
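The exception workflow described above is essentially a time-boxed record per bug, with a follow-up trigger at expiry. A minimal sketch, where the field names and the 90-day duration are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskException:
    bug_id: str
    owner: str           # business owner who explicitly accepts the risk
    approved_on: date
    duration_days: int   # the limited extension granted by management

    def expires_on(self) -> date:
        return self.approved_on + timedelta(days=self.duration_days)

    def needs_follow_up(self, today: date) -> bool:
        # The security team follows up once the exception period expires.
        return today >= self.expires_on()

exc = RiskException("BUG-1427", "lob-owner", date(2012, 1, 15), 90)
print(exc.expires_on())                       # 2012-04-14
print(exc.needs_follow_up(date(2012, 6, 1)))  # True
```

Tracking exceptions as dated records rather than open-ended waivers is what keeps an approved exception from becoming a permanent acceptance of the risk.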
Note: Critical and Important bugs may exist due to a technological or infrastructure limitation that cannot be mitigated in the current release. An exception should be created to track the issue until the limitation no longer exists.
- Code review information.
- Security tools:
- Web debugging proxy tools, such as Fiddler, allow you to inspect all HTTP(S) traffic, set breakpoints and “tamper” with incoming or outgoing data, build custom requests, and replay recorded requests.
- HTTP passive analysis tools capable of identifying issues related to user-controlled payloads (potential XSS), insecure cookies, and HTTP headers.
- Microsoft Network Monitor or similar tools that allow you to capture and perform a protocol analysis of network traffic.
- Browser plug-ins or standalone tools that allow lightweight tampering before data is placed on the wire are also very useful for web security testing.
- Automated penetration testing tools that crawl publicly exposed interfaces (for example, user interfaces and web services) probing for known/common classes of vulnerabilities and known/published exploits.
- Automated static code analysis tools that parse the syntax of your source code to identify suspected and known vulnerabilities, such as cross-site scripting and code injection.
- Deployment Review Index.
- Privacy Guidelines for Developing Software Products and Services.
This documentation is not an exhaustive reference on the SDL process as practiced at Microsoft. Additional assurance work may be performed by product teams (but not necessarily documented) at their discretion. As a result, this example should not be considered as the exact process that Microsoft follows to secure all products.
This documentation is provided “as-is.” Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it.
This documentation does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.
© 2012 Microsoft Corporation. All rights reserved.