Important: This document is archived and may not represent best practices for current development; links to downloads and other resources in it may no longer be valid.
Guidelines for Checking in Quality Code
The following list provides several guidelines for checking in quality code.
Insist on quality check-ins.
Do not accept low quality during code check-in; it leads to problems later in the product cycle. Realize that teams usually do not fix problems that are too complex, too obscure, or are discovered too late in the product cycle.
Track the kinds of mistakes that you typically make and use them as a checklist for future code. You can start your checklist with common errors that your group or division made, and then personalize that list for your use.
Conduct code reviews.
Code reviews give you an opportunity to explain and better understand your own code, and give others an opportunity to view your code anew.
Write unit tests.
The best way to ensure quality is to write tests that validate data and algorithms and verify that prior mistakes do not recur. There are four kinds of unit tests:
Positive unit tests exercise the code as intended and verify the right result.
Negative unit tests intentionally misuse the code to verify its robustness and appropriate error handling.
Stress tests push the code to its limits, hoping to expose subtle resource, timing, or reentrancy errors.
Fault injection tests expose error-handling anomalies.
For more information, see How to: Author a Unit Test.
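As an illustration of the first two kinds, the following Python sketch (the parse_port function is a made-up example, not from this document) pairs a positive test that exercises the intended path with negative tests that misuse the code on purpose:

```python
import unittest

def parse_port(text):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    port = int(text)                 # raises ValueError on non-numeric input
    if not 0 < port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTests(unittest.TestCase):
    def test_positive_valid_port(self):
        # Positive test: exercise the code as intended, verify the right result.
        self.assertEqual(parse_port("8080"), 8080)

    def test_negative_non_numeric(self):
        # Negative test: intentionally misuse the code, verify error handling.
        self.assertRaises(ValueError, parse_port, "not-a-port")

    def test_negative_out_of_range(self):
        self.assertRaises(ValueError, parse_port, "70000")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Stress and fault-injection tests follow the same structure but drive the code with extreme loads or deliberately failing dependencies.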
Use code analysis tools.
The simplest way to catch bugs early is to increase the warning level in your compiler and use code analysis tools. The critical points are to never ignore a warning and to fix the underlying code rather than suppress the message.
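The "never ignore a warning" discipline can be sketched in Python, where promoting warnings to errors plays the role of a compiler's warnings-as-errors switch; the old_api and new_api functions below are illustrative assumptions, not part of any real library:

```python
import warnings

def old_api(x):
    """Stand-in for any code path that emits a warning."""
    warnings.warn("old_api is deprecated; use new_api", DeprecationWarning)
    return x * 2

def new_api(x):
    """The fixed call path; produces no warning."""
    return x * 2

# Treat every warning as an error so it cannot be silently ignored,
# mirroring a compiler's warnings-as-errors switch.
warnings.simplefilter("error")

try:
    old_api(3)
except DeprecationWarning as w:
    # The warning now stops the run; the fix is to change the code.
    print("caught:", w)

print(new_api(3))
```

Once warnings halt the build or test run, they get fixed instead of accumulating.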
Do not use inappropriate language in your source code.
Your source code must not contain any inappropriate language or references. Many customers around the world are highly sensitive to certain phrases or names, particularly references to political entities whose status might be in question. Search your source code for politically sensitive references and language, and then report any errors.
Create work items.
Do not forget unfinished work; be sure to create work items for TODO, REVIEW, BUG, and UNDONE comments. For more information, see How to: Create Work Items.
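A minimal sketch of turning marked comments into trackable items: a hypothetical Python scanner that reports each TODO, REVIEW, BUG, and UNDONE comment with its file and line number, ready to be filed as a work item.

```python
import re

# Comment markers that should each become a tracked work item.
MARKERS = ("TODO", "REVIEW", "BUG", "UNDONE")
PATTERN = re.compile(r"#\s*(%s)\b[:\s]*(.*)" % "|".join(MARKERS))

def find_work_items(source, filename="<source>"):
    """Return (filename, line number, marker, text) for each marked comment."""
    items = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = PATTERN.search(line)
        if match:
            items.append((filename, lineno, match.group(1), match.group(2).strip()))
    return items

sample = """\
x = load()  # TODO: validate input before use
process(x)
# BUG handle empty result set
"""

for item in find_work_items(sample, "sample.py"):
    print("%s:%d [%s] %s" % item)
```

The same scan works for other comment syntaxes by adjusting the pattern.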
Features without a specification.
Do not write code without a specification. First, write a specification and have it reviewed. Without a specification, it is impossible for your test team to know what is working correctly and what is not. If you code without a specification, you and your team can misunderstand each other, misunderstand what your customer wants, and ship a poor-quality product. For more information, see Team Foundation Team Projects.
Reaching the middle of the first milestone without product setup in place.
The test team must have some way to set up the product on their computers, even though it might only be a prototype setup. For more information, see Team Foundation Build Overview.
Use a consistent coding style.
When your entire team codes in the same style, your product gains readability, consistency, maintainability, and overall quality. The specifics of the guidelines themselves are not so important. What is important is to establish some guidelines and to make your team faithfully follow the guidelines. The major benefits that accrue from picking any style are consistency and ease of recognizing coding patterns. So pick a style and use it.
Write unit tests before you write the code.
Test-first development is a key methodology from agile development and extreme programming. You accomplish several quality goals by writing unit tests first:
You ensure that unit tests are written.
You ensure that your code can be tested easily, which frequently leads to better code cohesion and looser coupling between modules.
You often discover the appropriate design for your work by first determining how the design should be tested.
For more information, see How to: Author a Unit Test.
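A test-first sketch in Python: the tests below are written first, against a median function that does not yet exist, and then the simplest implementation that satisfies them is added. The function and its expected behavior are illustrative assumptions.

```python
import unittest

# Step 1: write the tests first, against a function that does not exist yet.
class MedianTests(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_empty_input_rejected(self):
        self.assertRaises(ValueError, median, [])

# Step 2: write the simplest implementation that makes the tests pass.
def median(values):
    if not values:
        raise ValueError("median of empty sequence")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2.0

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Writing the tests first forced decisions about the interface (what input is rejected, what an even-length result looks like) before any implementation existed.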
Make your code portable to other platforms.
Designing and coding for portability makes your code more robust, even if you never intend to actually ship your code on another platform. When you make code portable, you:
Make better assumptions.
Are clearer about data types and design intent.
Ensure that your code is better able to support new platforms in the future.
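One way to be clearer about data types is to state byte order and field width explicitly whenever data leaves the process. The following Python sketch (the record layout is a made-up example) serializes the same bytes on every platform, and builds file paths without hard-coding a separator:

```python
import struct
import os.path

# Non-portable habit: native byte order and int width ("@i" varies by platform).
# Portable: state byte order and field width explicitly.
RECORD = struct.Struct("<Ih")   # little-endian: unsigned 32-bit id, signed 16-bit score

def encode_record(record_id, score):
    """Serialize a record identically on every platform."""
    return RECORD.pack(record_id, score)

def decode_record(data):
    return RECORD.unpack(data)

packed = encode_record(1000, -5)
assert len(packed) == RECORD.size == 6      # same size everywhere
assert decode_record(packed) == (1000, -5)

# The same idea for file paths: build them, never hard-code a separator.
config = os.path.join("etc", "app", "settings.ini")
print(repr(packed), config)
```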
Refactor existing code into smaller functions.
Refactoring can bring new life to old code. Trying to fix large old systems can be difficult, because some are so complex in their interactions that you are reluctant to change even a comment.
To refactor successfully, you must first incorporate strong unit testing to ensure that refactoring does not introduce new errors. Next, separate large functions into collections of smaller functions without changing functionality at all. The guidelines are:
Each smaller function should perform just one type of task, such as user interface, database access, COM interaction to a single interface, and so on.
After you have completely refactored all the functions in a subsystem, you can change individual small functions without affecting the entire system. You can add, remove, or enhance functionality one function at a time.
For more information, see Refactoring Classes and Types.
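A small before-and-after sketch of this kind of refactoring, using a hypothetical reporting function: the large function is split so that each helper performs one type of task, while the observable behavior stays identical.

```python
# Before: one function mixes computation and formatting.
def report_before(rows):
    total = 0
    for row in rows:
        total += row["amount"]
    return "Total: %d items, %d units" % (len(rows), total)

# After: each smaller function performs one type of task,
# without changing functionality at all.
def sum_amounts(rows):
    """Computation only."""
    return sum(row["amount"] for row in rows)

def format_report(count, total):
    """Presentation only."""
    return "Total: %d items, %d units" % (count, total)

def report_after(rows):
    """Coordination only: delegates to the focused helpers."""
    return format_report(len(rows), sum_amounts(rows))

rows = [{"amount": 3}, {"amount": 4}]
assert report_before(rows) == report_after(rows)  # behavior unchanged
print(report_after(rows))
```

With the pieces separated, the computation or the formatting can now be changed, tested, or replaced one function at a time.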
Review existing code.
Bugs often congregate in specific modules. When your new code is clean but there are certain modules of your old code that harbor bugs, review just those modules. If new code is too intertwined with old code, refactoring can often help resolve issues. For more information, see Detecting and Correcting C/C++ Code Defects and Detecting and Correcting Managed Code Defects.
Step through all code paths in a debugger.
One of the best ways to verify code is to step through it in a debugger. At each line, verify that what you expect to occur is actually happening. Stepping through all code paths in a debugger is like performing a line-by-line unit test. The procedure is tedious, but it effectively verifies expected behavior.
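The idea can be approximated programmatically. The Python sketch below uses sys.settrace to record every line executed in a hypothetical classify function while tests drive each branch, confirming that all code paths were actually stepped through:

```python
import sys

def classify(n):
    if n < 0:
        kind = "negative"
    elif n == 0:
        kind = "zero"
    else:
        kind = "positive"
    return kind

executed = set()

def tracer(frame, event, arg):
    # Record each line the interpreter executes, like stepping in a debugger.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

sys.settrace(tracer)
for value in (-1, 0, 1):        # drive every branch of classify
    classify(value)
sys.settrace(None)

print("relative lines executed:", sorted(executed))
```

If a branch body's line never appears in the set, that code path was never exercised, and it deserves either a test or a walk through the debugger.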
A requirements document as a specification.
Do not try to infer a specification from a requirements document. Your interpretation might differ from the program manager's or the tester's interpretation. If it does, your implementation will not measure up to what others expect.