Applications evolve over time. New features are added, bugs are fixed, and security threats evolve. As such, it is necessary to periodically review both the threats your application faces and the security measures that protect it, to ensure the application is not exposed to unwanted risks.
The following are steps you can take to preserve the security of your application:
Add security comments to your source code
During the design and coding phases, you made numerous decisions regarding the implementation of security in your application. Future developers that maintain your source code may or may not fully understand these decisions or the ramifications of modifying portions of your source code. You can lessen this risk by adding comments to your source code that identify your assumptions, the intent of the code, and any dependencies on external security measures (such as access control lists (ACLs) on files, authentication methods, and so on).
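The kind of comment this recommends can be sketched as follows. This is a hypothetical example (the `REPORT_ROOT` path and `open_report` function are invented for illustration); the point is that the comments record the security assumptions, the intent of each check, and the dependency on an external ACL.

```python
import os

# SECURITY ASSUMPTION: report files live under REPORT_ROOT, which is
# protected by an ACL granting read access only to the service account.
# If that ACL is ever removed, the check below becomes the only defense.
REPORT_ROOT = "/var/app/reports"  # hypothetical path

def open_report(name):
    """Resolve a report name to a path, rejecting traversal attempts."""
    root = os.path.realpath(REPORT_ROOT)
    # SECURITY: normalize before checking, because a raw join would let
    # a request like "../../etc/passwd" escape REPORT_ROOT.
    candidate = os.path.realpath(os.path.join(root, name))
    # SECURITY: commonpath, not startswith, so a sibling directory such
    # as "/var/app/reports-old" is not mistaken for a child of root.
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError("path escapes the report directory")
    return candidate
```

A future maintainer who reads these comments knows why the checks exist and what external measures (the ACL) the code silently depends on.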
Regression test bug fixes
It is inevitable that you will identify bugs in your application after releasing it. It is also possible that fixing a bug will result in additional bugs appearing that were previously obscured by the fixed bug. Before releasing bug fixes, regression test your application to ensure the fix does not jeopardize your application's security.
Regression test platform changes
Just as bugs are inevitable in your software, the same is true for the platform your application runs on and the applications that your software interacts with. When patching a platform or applications external to your own, you must be aware of how those changes impact your application. For example, your application may unintentionally rely upon previously broken behavior in the platform as part of your security design.
Monitor support requests
When you designed your application, you made assumptions about how it would be used. To validate these assumptions, you should monitor support requests and discussion forums to evaluate real-world scenarios. For example, your application may require security settings that users are either unwilling or unable to implement, which can lead to an increase in support issues.
Develop a good auditing policy
A good auditing policy requires that you record events of interest that take place on your system and evaluate them in a timely fashion. Timely audit trails facilitate the pursuit of perpetrators. A delayed review of the audit trail often means the security problem is fixed too late: after the perpetrator has completed all destructive actions.
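The two halves of that policy (record events of interest, then evaluate them promptly) can be sketched as follows. This is a minimal in-memory illustration; the `AuditLog` class and its `"denied"` outcome label are assumptions for the example, and a real deployment would write to protected, append-only storage.

```python
import time

class AuditLog:
    """Append-only, timestamped trail of security-relevant events.
    In-memory for the sketch; real systems persist to protected storage."""

    def __init__(self):
        self.events = []

    def record(self, actor, action, outcome):
        # Timestamp at record time so the trail supports timely review.
        self.events.append({
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "outcome": outcome,
        })

    def failures_by(self, actor):
        # Evaluation side: pull one actor's denied actions for follow-up
        # before a pattern of probing turns into a successful attack.
        return [e for e in self.events
                if e["actor"] == actor and e["outcome"] == "denied"]
```

The key design point is that recording and reviewing are both part of the policy; a trail nobody reads in time offers little protection.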
Monitor not-found errors
The Web Service performance object in System Monitor includes a counter that displays not-found errors. Not-found errors are client requests that could not be satisfied because they included a reference to a Web page or a file that did not exist. (These errors are sometimes described by their HTTP status code number, which is 404.)
Many not-found errors occur because Web pages and files are deleted or moved to another location. However, some can result from user attempts to access unauthorized documents. (The code number of these "Access forbidden" errors is 403. Most browsers report them differently from 404 errors and they do not show up in the Not Found Errors/sec counter results.)
You can use the Web Service object's Not Found Errors/sec counter to track the rate at which not-found errors occur on your server. Alternatively, set a System Monitor alert to notify an administrator when the rate of not-found errors exceeds a threshold.
An increase in not-found errors can indicate that a file has been moved without its link being updated. However, it can also indicate failed attempts to access protected documents, such as user lists and file directories.
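If you are not on IIS, or want the same alerting logic in application code, the counter-plus-threshold idea above can be sketched generically. The threshold value and the `not_found_alerts` helper are assumptions for illustration; tune the threshold to your site's normal error rate.

```python
from collections import Counter

ALERT_THRESHOLD = 5  # 404s per sampling interval; hypothetical value

def not_found_alerts(status_codes, threshold=ALERT_THRESHOLD):
    """Summarize one sampling interval of HTTP status codes.

    Mirrors the System Monitor approach: count 404s, track 403s
    separately (as IIS does), and raise an alert flag when the
    not-found count crosses the threshold.
    """
    counts = Counter(status_codes)
    return {
        "not_found": counts[404],   # broken links, or probing
        "forbidden": counts[403],   # access-denied attempts, kept separate
        "alert": counts[404] >= threshold,
    }
```

An administrator would then inspect the flagged interval to decide whether the spike reflects a moved file or an attempt to access protected documents.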
Do not store secrets
If at all possible, do not store sensitive information. Secret data management is usually one of the most challenging aspects of designing a secure system.
Encourage the use of least privilege
Design your application to require the least amount of access privileges. In doing so, you reduce the likelihood of your application being used by an attacker as a platform from which to attack a computer. Also, if your application requires a user to log on using a highly privileged account, such as one with Administrator privileges, you could expose the user to other avenues of attack.
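One simple enforcement of this advice is to detect elevation at startup and refuse to run, or warn loudly. A POSIX-only sketch (the `running_least_privilege` helper is invented for illustration; on Windows you would instead check membership in the Administrators group, since `os.geteuid` does not exist there):

```python
import os

def running_least_privilege(euid=None):
    """Return True if the process is NOT running with superuser rights.

    POSIX-only sketch: effective UID 0 means root. The euid parameter
    exists so the check can be tested without changing real privileges.
    """
    if euid is None:
        euid = os.geteuid()
    return euid != 0

# A service might guard its entry point like this:
# if not running_least_privilege():
#     raise SystemExit("refusing to run with administrative privileges")
```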
Assessing damage potential
Microsoft uses the acronym DREAD when assessing potential security vulnerabilities. The acronym stands for the following factors: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.
Each factor is graded on a scale of 1 (lowest) to 10 (highest). Based on the score for each factor, a subjective determination is made as to the overall rating of the vulnerability. Ultimately, the decision to fix a vulnerability also depends on other factors that are difficult to quantify, such as impact on reputation.
Damage potential: If this vulnerability is successfully exploited, what is the worst that can happen?
Note: The following scales represent sample definitions.
- 10 - The attacker gains full control of the computer.
- 2 through 9 - Each number represents increasing degrees of damage.
- 1 - The attacker can read or write limited amounts of harmless information, if any.
Reproducibility: How easy is it to reproduce an attack on this vulnerability?
- 10 - Happens on every attempt.
- 2 through 9 - Each number represents increasing ease of reproducibility.
- 1 - Repeated testing results in rare occurrences.
Exploitability: How easy is it to mount an attack based on this vulnerability?
- 10 - Requires little to no knowledge or time.
- 2 through 9 - Each number represents increasing ease of exploitation.
- 1 - Requires exhaustive effort and vast financial resources.
Affected users: What percentage of users is likely to be affected by this vulnerability?
- 10 - 91-100%
- 2 through 9 - Each number represents an increasing percentage of affected users (for example, 2: 10-19%, 3: 20-29%, and so on).
- 1 - 0-9%
Discoverability: How easy is it to find this vulnerability?
- 10 - Widely publicized information is available.
- 2 through 9 - Each number represents increasing degrees of publicity.
- 1 - So obscure that nobody is likely to ever learn of its existence.
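The article leaves the overall rating as a subjective determination; a common convention, used here purely as an illustration, is to average the five factor scores. The `dread_rating` helper below is an assumption, not part of the original method.

```python
FACTORS = ("damage", "reproducibility", "exploitability",
           "affected_users", "discoverability")

def dread_rating(scores):
    """Combine five DREAD factor scores (each 1-10) into one number.

    Uses a simple average; the article treats the overall rating as
    subjective, so this is one convention, not the official formula.
    """
    if set(scores) != set(FACTORS):
        raise ValueError("need exactly the five DREAD factors")
    for factor, value in scores.items():
        if not 1 <= value <= 10:
            raise ValueError(f"{factor} must be between 1 and 10")
    return sum(scores.values()) / len(FACTORS)
```

For example, a flaw that is devastating, trivially reproduced, widely discoverable, and affects everyone, but takes moderate skill to exploit, averages to a high rating and would normally be fixed first.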