Guidelines for Writing Secure Code
The following guidelines provide several techniques for writing secure code.
Use code analysis tools.
Visual Studio Team System Development Edition ships with code analysis tools that can greatly increase your chances of finding security bugs in your code. These tools find bugs more efficiently and with less effort than manual review alone. For more information, see Detecting and Correcting C/C++ Code Defects and Detecting and Correcting Managed Code Defects.
Conduct a security review.
The goal of every security review is either to enhance the security of products that have already been released — through patches and fixes — or to ensure that no new products ship until they are as secure as possible.
Do not review code at random. Prepare in advance for the security review, and begin by carefully creating a threat model; if you do not, you can waste a lot of your team's time. Prioritize which code should receive the heaviest security review and which security bugs should be addressed first.
Be specific about what to look for in a security review. When you look for specific problems, you usually find them. Look even harder if your team is finding a lot of security bugs in one area; it probably indicates an architecture issue that must be fixed. If you are finding no security bugs, that usually means that the security review was not performed correctly.
Hold a security review as part of the stabilization phase for each milestone, and as part of any larger product-line push set by management.
Use a code review checklist for security.
Regardless of your role in the software development team, it is useful to have a checklist to follow to ensure that the design and code meet a minimal bar.
Validate all user input.
If you allow your application to accept user input, either directly or indirectly, you must validate the input before using it. Malicious users will try to make your application fail by tweaking the input to represent invalid data. The first rule of user input is: All input is bad until proven otherwise.
Be careful when you use regular expressions to validate user input. For complex expressions like e-mail addresses, it is easy to think that you are doing complete validation when you are not. Have peers review all regular expressions.
Strongly validate all parameters of exported application programming interfaces (APIs).
Ensure that all parameters of exported APIs are valid. This includes input that looks consistent but is beyond the accepted range of values, such as enormous buffer sizes. Do not use asserts to check the parameters for exported APIs because asserts will be removed in the release build.
Use the Windows cryptographic APIs.
Instead of writing your own cryptographic software, use the Microsoft cryptographic APIs that are already available. By using the Microsoft cryptographic APIs, developers are free to concentrate on building their applications. Remember that encryption solves a small set of problems very well and is frequently used in ways that it was never designed for. For more information, see Cryptography Overview in the MSDN Library.
Avoid buffer overruns.
A static buffer overrun occurs when a buffer declared on the stack is overwritten by copying data larger than the buffer. Variables declared on the stack are located next to the return address for the function's caller. The usual culprit is unchecked user input passed to a function such as strcpy, and the result is that the return address for the function gets overwritten by an address chosen by the attacker. Buffer overruns can also occur in the heap, and those are just as dangerous. Preventing buffer overruns is mostly a matter of writing a robust application.
Asserts to check external input.
Asserts are not compiled into retail builds. Do not use asserts to verify external inputs. All parameters for exported functions and methods, all user input, and all file and socket data must be carefully verified for validity and rejected if faulty.
Hard-coded user ID and password pairs.
Do not use hard-coded passwords. Modify the installer so that, when built-in user accounts are created, the administrator will be prompted for strong passwords for each account. This way, the security of the customer's production-level systems can be maintained.
Using encryption solves all security issues.
Encryption solves a small set of problems very well and frequently is used in ways that it was never designed for.
Canonical file paths and URLs.
Avoid situations where location of a file or a URL is important. Use file system ACLs instead of rules based on canonical file names.
Review all old security defects in your application.
Become knowledgeable about security mistakes that you have made in the past. Frequently, code is written in repeated patterns, so a bug in one location made by one person might indicate the same bug in other locations made by other people.
Review all error paths.
Often, code in error paths is not well tested and does not clean up all objects, such as locks or allocated memory. Carefully review these paths and, as needed, create fault-injection tests to exercise the code.
Do not require administrator privileges for your application to run.
Applications should run with the least privilege necessary to get the work done. If a malicious user finds a security vulnerability and injects code into your process, the malicious code will run with the same privileges as the host process. If the process is running as an administrator, the malicious code runs as an administrator. For more information, see Developing Secure Applications in the MSDN Library.