
Success with Static Analysis

Peter Jenney, VP Security Innovation

Security Innovation

Security Innovation is a leading provider of software security assessment and training solutions.  Organizations rely on the company to identify risks in their software and improve the process by which it is built. 



Introducing tooling to the software development process is always challenging. New tools change the way developers work, generally forcing them to learn new techniques and disrupting the development process. Change, however, is a fact of life: as new problems arise in the creation of software, new tools are developed to address them, and some disruption is inevitable.

The current problem is software security. The development world has come to the point where all of the band-aid solutions, such as physical and application firewalls, are in place, but the hackers are still getting through. Organizations are starting to realize that it's the code itself that enables attack, and that it's the responsibility of the development team to address it.

Addressing software security is a huge, resource-consuming problem that involves looking at the way software is defined, designed, written, tested and deployed. While security touches each phase of the lifecycle, no one place has gotten more attention than static and dynamic security analysis, as typified by Fortify and Rational/Watchfire. Of the two, Watchfire has generated the most buzz and revenue, but as it is a post facto analysis tool, it doesn't address the source of the problem, just the presentation. Static analysis looks at the source code itself and catches problems before products are deployed, somewhat obviating the need for post facto analysis altogether; yet the number of deployed post facto testing tools dwarfs that of the static space.

Catching and dealing with security defects earlier in the development lifecycle is naturally cheaper than dealing with them once the applications are deployed, and therein lies the paradox -- if it's cheaper and easier to address the issues in-process, why do the post facto analysis tools dominate the mind share and market? There are two answers:

  1. Post facto dynamic analysis does not typically involve the development team directly, allowing the team's exposure to come through defect management systems rather than direct interaction with the application
  2. Security is considered the realm of an actor outside the normal development team, one with specialized security engineering training of a kind rarely available through the university system

Combined, these place the problem of security squarely outside the typical software development team's scope of responsibility and allow the team to avoid the whole issue.

Developments in the static analysis tool market, however, are changing the game and making it possible to build effective security analysis directly into the development process.

Until now, the responsibility for security in the development practice has in fact been left to specialists, and those specialists had their own sets of tools to do their jobs and provide the guidance that development teams needed. This approach, while generally effective, is expensive in terms of resources and difficult to integrate smoothly into a software team's process; and because it is both required and expensive, the natural tendency is to look for cost reduction opportunities, such as eschewing the expert and going straight for the tools the expert uses. That approach may work in other areas of software development, but in security, quality actually tends to DECLINE, because the result data is outside the typical developer's technical scope of knowledge and provides little or no context for determining the best course of action to mitigate issues. The current tool sets also tend to generate significant numbers of false positive reports, which further exacerbate the problem by forcing teams to track down problems that don't actually exist.

Recently there have been fundamental changes in the static security analysis tool space that directly address the major issues that made developers shy away from the earlier tools: usability, efficiency and false positive reporting. These next generation tools are designed to integrate with normal software engineering workflows, to report accurately on security defects, and to suggest repair techniques that fit the engineer's development and testing process. Such tools, typified by CxDeveloper from Checkmarx, integrate with the development team's IDEs and allow security analysis to take place as part of the normal iterative design, code, test, and analyze process. Integrating in this manner lets users solve real problems and get smarter in the process, as they gain insight into what secure code looks like and how to incorporate that knowledge into future activities.

Disruption caused by the introduction of new problems and concepts is inevitable. It increases the workload on all of the software team members and forces them to deal with new concepts. The best way to mitigate disruptive impact on the team is to integrate seamlessly with the existing process while truly improving the quality of the code being developed and the capabilities of the engineers using the tools; this round of security tools from Checkmarx is doing it right.

Desktop Analysis

Historically, static code analysis has required a complete and buildable project to run against, which made the build server, in line with the entire build process, the logical place to do the analysis. The "buildable" requirement also pushed the scan toward the end of the development process, making security repairs to code more expensive. Modern agile/iterative techniques require that testing be done inline and that whatever gets checked into the build system is solid, secure and plays nicely with all the other code in the build -- which suggests a need for static security analysis on the developer's and tester's desktop.

Ben Franklin is credited with saying, "Take care of the pennies and the dollars will take care of themselves." The same adage applies to anything made from pieces, and so applies naturally to modular software. In the case of static security analysis, if the code that a developer checks in is secure, the only places needing testing are the integration points with other modules; and if all of the code is tested with appropriate access to integration resources, even the integration issues should be mitigated.

Depending on the size of an organization, a developer may have access to the code for a complete project, or just the portion he or she works on, along with the binaries and header files needed to build that code. In either case, the opportunity to do a static security analysis exists, but it takes a system that's flexible and can be configured to make certain assumptions about the code it can't see, specifically trusting method calls and enforcing their proper usage.

For example, if there's a SQL query sanitization routine that must be called before any database calls are made, that rule should be enforced at the desktop, not at the system build. Enforcing it there not only improves the quality of the code, but also makes the developer smarter and more likely to use the proper technique the next time it's needed. Here's an example:


// Proper usage (assume global exception handler)
string[] getCardNumbers()
{
    string name = sanitize(getNameInput());
    string q1 = "SELECT cards FROM allcrds WHERE uname = '" + name + "'";
    string q2 = q1;
    return callDatabase(q2);
}

// Improper usage (assume global exception handler)
string[] getCardNumbers()
{
    string q1;
    q1 = "SELECT cards FROM allcrds WHERE uname = '" + getNameInput() + "'";
    return callDatabase(q1);
}

We'll be coming back to this code later, because it not only demonstrates a classic security defect, but also shows the need for a very flexible and configurable mechanism to discover it.

Desktop Tooling Enables In-Line Software Security Testing

Checkmarx CxDeveloper is an excellent example of static security analysis software for the software engineer's desktop. It provides the scanning and analysis utilities that developers and testers need in near real time, and a rich environment for analyzing security defects and pinpointing the places in the code that need repair.

The interface is clear and intuitive and provides a unique view of each vulnerability as a path, showing where the vulnerability is accessed all the way through to where it is presented. This path-based methodology, besides being clear and easy to understand visually, is also the reason that CxDeveloper reports so few false positives: it only reports on conditions that can be absolutely proven, rather than flagging everything that could possibly be a security problem, as the rest of the static analysis tooling on the market does. The combination of accuracy, usability and availability on developers' and testers' desktops makes it possible to distribute the process of source code security analysis and dramatically improve the quality of software without dramatically affecting the efficiency of the development process.

Consider a development group of 20 individuals working on a client/server project, divided into two teams, one for the server and one for the client. Each team member has complete access to the code for their system, but some choose to work with just the object files for the code that they don't work on. At the end of each workday, the build manager kicks off a nightly build that includes a static security analysis, a general smoke test and other automated test scripts. The following morning, problems that were reported by the build are triaged and logged for repair by the appropriate developer.

Scenario 1: Centralized Static Analysis

With first generation server-based static analysis tooling such as Fortify, all of the code to be analyzed needs to compile and link, thereby forcing the system build to run first, followed by any other tests or automated activities. Should the system fail to build one night for one reason or another, the static security analysis will not happen, and any security defects that might have been discovered in the run are not logged in the morning's triage. The risk in this scenario is that builds may fail often and key security vulnerabilities go unreported. The defects that are reported may be false, or they may be very complex and require significant effort to research and repair; and as the process draws nearer to the ship date, security defects get pushed into a future release.

Scenario 2: Distributed Static Analysis

With next generation desktop-based static analysis such as Checkmarx CxDeveloper, there are several efficiencies to be realized. First, the code that CxDeveloper analyzes does not have to compile or link: developers can point at just the code they're currently working on and run analyses to uncover security vulnerabilities. Second, the result of the scan is a visual representation of any defects discovered, which makes it immediately obvious what needs to be repaired; the system also suggests the best way to proceed, so the time to repair is reduced and the developer gains knowledge that may be applied later. Finally, if the organization has defined specific interface use rules, those too are tested and reported, ensuring that all the local code is secure right up to the interface with other modules. The risk in this scenario is that developers do not run the scans as required and security defects get into the build. This, however, is easily mitigated by forcing an automated scan on the build server at check-in or by including a complete module-by-module system scan as part of the QA process.

Both scenarios are good because they acknowledge software security as an issue that must be dealt with during the software's development; however, the distributed mechanism described in Scenario 2 is superior, as it leads to more secure code and fewer opportunities to push defects to future releases, and it makes the development staff smarter as they solve problems with the tool.

Microsoft Visual Studio Integration Enables Inline Scanning

Getting the proper tools onto the desktop is the key to minimizing security defects in the development process. As has been demonstrated repeatedly in various areas, getting problem location and repair tools into the developer's integrated development environment (IDE) improves the quality of the product and the efficiency with which it is developed. Examples are many, but a few significant ones are:

  • Integrated debugging
  • Compiler error and warning linkage to code
  • Embedded programming language documentation
  • Integrated database development tools

In each of these cases, the tools were originally stand-alone utilities; integration meant that the software engineer did not have to leave the environment, or configure another one, to do the specialized work, streamlining the process and improving quality and efficiency. Security analysis tooling is no different, and the same quality and efficiency gains are found by integrating with the IDE.

Again, Checkmarx CxDeveloper is a fine example of successfully leveraging the Visual Studio IDE to embed static security analysis into the engineer's normal activities. Code scans are executed directly from the solution panel, where most other frequently used options live, so the activity is non-intrusive and follows the developer's logical flow. Security query results are displayed in their own results window and provide the same rich path navigation and guidance available in the stand-alone UI; developers can also edit the code in real time, just as they would from the normal Visual Studio Error List window, with the same click-through-to-error support as always. Checkmarx effectively allows developers to seamlessly integrate static security analysis into their normal activities without changing the Visual Studio paradigm, improving the security, and arguably the overall quality, of the code they develop.


Continuous Integration For Quality

Giving developers the tooling they need to do effective software development on their desktop is a dramatic advance in the state of secure application development, but just having the tooling isn't always enough. Continuous integration is a mechanism for reinforcing the best behavior in software developers by encouraging them to check in good code that works well with the code other developers write. The process is simple and is tied to the check-in phase of development: when code is checked into the team's version control system, it is immediately built and linked with the rest of the code, then run through a series of automated tests to ensure that it behaves properly.

Figure 2 - Conceptual Continuous Integration


If the code does not pass, it's known as Breaking The Build, and the developer must immediately back out the changes that were checked in to restabilize the build. There are typically negative feedback items that follow, ranging from an email pointing the finger at the offending developer, to a requirement to bring doughnuts for the group the next day, to perhaps, very occasionally, a public flogging. Regardless of the feedback's form, it encourages developers to test their code prior to checking it in, which has an overall positive effect on the quality of the product.
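The check-in loop described above can be sketched as a simple gate: build the change, run the automated tests, and reject anything that fails. The following is a toy Python model for illustration only; the build and test callables are hypothetical stand-ins, not any real CI system's API.

```python
# Toy sketch of a continuous-integration check-in gate: build, then run
# automated tests; any failure "breaks the build" and the change is rejected.
# build and tests are caller-supplied stand-ins, not a real CI interface.

def continuous_integration_gate(change, build, tests):
    """Return (accepted, reason) for a checked-in change."""
    if not build(change):
        return False, "build failed"
    for name, test in tests:
        if not test(change):
            return False, f"test failed: {name}"
    return True, "accepted"

# Example: a change that compiles but fails the smoke test is rejected.
tests = [("smoke", lambda c: c["smoke_ok"]), ("security", lambda c: c["scan_ok"])]
good = {"smoke_ok": True, "scan_ok": True}
bad = {"smoke_ok": False, "scan_ok": True}

assert continuous_integration_gate(good, lambda c: True, tests) == (True, "accepted")
assert continuous_integration_gate(bad, lambda c: True, tests) == (False, "test failed: smoke")
```

The ordering matters: the build runs first because nothing else is meaningful against code that does not compile, which is exactly the limitation the next section addresses for security scans.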

Breaking The Build For Security

Application security is rapidly being recognized as a fourth element of quality, along with stability, usability and performance. In robust continuous integration environments, quality in general is tested, and that naturally leads to testing application security. The most logical mechanism is to test from the source code out, rather than from the built application in, simply because all the code is accessible -- hence static analysis. For many of the same reasons that doing effective static security analysis on the desktop has been a challenge, doing it at check-in is difficult. On check-in, running an entire static analysis requires that the project builds; that is of course a requirement of the continuous integration environment itself, but it can be expensive time-wise and is completely unnecessary. The code that needs to be tested on check-in is the code being checked in, so an obvious optimization to the security testing process is to test just that code, and not all the rest. Checkmarx provides a tool named CxConsole which does just that. It is a command line program that can be integrated with the normal check-in scripts and run early, before the build even starts. The result is an email to a predefined team that points to the results of the run should it fail. Beyond breaking the build, distributing the run results lets developers and managers quickly see the failure and give the responsible developer constructive feedback, perhaps mitigation guidance or a good shared example for reuse. Either way, the static analysis of just that developer's code stops the check-in, and the security defect is addressed before it ever reaches the build.

Figure 3 - Security Optimized Continuous Integration
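The scan-only-what-changed optimization can be sketched as follows. This is a minimal illustration, not CxConsole's actual interface: scan_file is a crude stand-in that merely flags string-concatenated SQL built without a sanitize call, and the file names are invented.

```python
# Sketch of the check-in-time optimization described above: scan only the
# files being checked in, before any build runs. scan_file is a deliberately
# naive stand-in for a real static analyzer.

def scan_file(source: str) -> list[str]:
    """Flag lines that concatenate values into SQL without a sanitize call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if "SELECT" in line and "+" in line and "sanitize" not in line:
            findings.append(f"line {lineno}: possible SQL injection")
    return findings

def precommit_scan(changed_files: dict[str, str]) -> dict[str, list[str]]:
    """Scan just the changed files; a non-empty result breaks the build."""
    results = {name: scan_file(src) for name, src in changed_files.items()}
    return {name: f for name, f in results.items() if f}

changed = {
    "Cards.cs": 'q1 = "SELECT cards FROM allcrds WHERE uname = " + getNameInput();',
    "Util.cs": "int x = 1;",
}
failures = precommit_scan(changed)
assert list(failures) == ["Cards.cs"]   # only the risky file breaks the build
```

Because the gate inspects source text rather than compiled output, it can run on an arbitrary subset of files, which is what makes scanning before the build possible at all.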

Combining static analysis on the desktop with a robust continuous integration environment allows the static security analysis to be seamlessly integrated into the development process and its use reinforced in a manner familiar to most agile development teams—a double win.

Nightly Builds For Complete Testing

Many organizations rely on a nightly build to do all the needed testing on the code. Typically a build manager will call for all code to be checked in at the end of the day and then kick off the entire process, which does the build and executes a suite of smoke, functional and performance tests. Upon completion, the build manager and a triage team go through the build results, log any defects found into the team defect management system, and get on with the day. Security testing may be included in this process as well, and historically it has run against the code of a built application. The obvious problem in this scenario is the one described in Scenario 1: Centralized Static Analysis, where the risk of multiple failed builds could push security scans out of the process altogether, or return so many false positives as to make the results of little use. Checkmarx CxConsole is the answer in this case. Regardless of whether a project builds or not, CxConsole will run an analysis and provide accurate results that can be used immediately. The increase in efficiency in the nightly build will resist the tendency to drop security from the released product and help deliver higher quality applications than would otherwise be possible.

Complete System Scans

Quality Assurance means many things in the software development world, and regardless of what it's called, software security is part of the job. QA teams may need tools that allow them to completely test code and discover security vulnerabilities, regulatory compliance issues, or lapses in adherence to organizational coding standards. Using modern static analysis tooling, teams should be able to check out source trees and scan them to watchdog the code in general, but they also need to be able to create custom rules that identify non-compliance and report it as defects.

Next generation static analysis tools such as those from Checkmarx not only allow code to be scanned, but also support the development of specialized queries to enforce compliance or discover additional security or functional defects in the code, and the distribution of those queries to the members of the development team. Earlier in this paper we identified a code snippet and mentioned it would be important later. Let's have another look at it now.

// Improper usage (assume global exception handler)
string[] getCardNumbers()
{
    string q1;
    q1 = "SELECT cards FROM allcrds WHERE uname = '" + getNameInput() + "'";
    return callDatabase(q1);
}

The rule in this mythical organization is that no SQL code can reach the database without being sanitized by a specific routine. A first generation static analysis tool would have to generate patterns for every way the rule could be violated and would end up with a large number of false positives. Next generation tools like CxAudit employ a query language that allows real time analysis based on control and variable flow through the system. So, to capture all non-compliant cases, the risk analyst could create the following query:

CxList input = All.FindByName("*Input");

CxList clean = All.FindByShortName("sanitize");

CxList execute = All.FindByShortName("callDatabase");

result = execute.InfluencedByAndNotSanitized(input, clean);

Assuming for the sake of this example that all "input" items contain the word input, this query finds every place that data is collected, every place that data is sanitized and every place where data is sent to the database, and then does a complete path correlation analysis to determine whether any data items are not being sanitized. In this case, it would find the line:

q1 = "SELECT cards FROM allcrds WHERE uname = '" + getNameInput() + "'";

because there is no sanitization routine in the path. It would not find any other errors, as the appropriate sanitization rules are followed elsewhere. Queries such as this can be created and added to collections that software teams execute from their desktops, during check-in, during nightly builds, or during general security code reviews. This ability lets teams "control their own destinies", as it were, because they have the tools to create the checks they need without having to go back to the vendor for custom code.
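Conceptually, the query above performs a taint propagation over the program's flow graph: values influenced by an input are tracked until they either pass through a sanitizer or reach a database call. The sketch below is a minimal illustration of that idea with an invented flow-graph representation; it is not how CxAudit is implemented.

```python
# Minimal taint-propagation sketch of "influenced by input and not sanitized".
# flows is a list of (src, dst) edges for assignments and calls; taint spreads
# from input nodes but stops at sanitizer nodes.

def influenced_and_not_sanitized(flows, inputs, sanitizers, sinks):
    """Return the sinks reachable from an input without passing a sanitizer."""
    tainted = set(inputs)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for src, dst in flows:
            if src in tainted and dst not in sanitizers and dst not in tainted:
                tainted.add(dst)
                changed = True
    return [s for s in sinks if s in tainted]

# Improper path: getNameInput -> q1 -> callDatabase, with no sanitize step.
flows_bad = [("getNameInput", "q1"), ("q1", "callDatabase")]
assert influenced_and_not_sanitized(
    flows_bad, {"getNameInput"}, {"sanitize"}, ["callDatabase"]) == ["callDatabase"]

# Proper path: input is routed through sanitize first, so taint stops there.
flows_ok = [("getNameInput", "sanitize"), ("sanitize", "name"),
            ("name", "q1"), ("q1", "callDatabase")]
assert influenced_and_not_sanitized(
    flows_ok, {"getNameInput"}, {"sanitize"}, ["callDatabase"]) == []
```

Because the analysis reasons over whole paths rather than individual patterns, a violation is reported only when an unsanitized route to the sink provably exists, which is the source of the low false positive rate discussed above.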

Performance Metrics

Using next generation static security analysis tooling has several advantages, including accuracy and flexibility, but those advantages often come at a cost. The fact of the matter is that next generation systems eschew basic parsing and rule matching algorithms in favor of lexical/path analysis and query languages -- both of which are compute intensive and affect the speed at which source code can be scanned. Regardless of raw processing speed, though, next generation systems will generally be faster and more cost effective overall due to the accuracy of their results, specifically the reduction of false positives. False positives are an artifact of parsing and rule matching systems and are very difficult to mitigate. The costs associated with verifying results are measurable in terms of money and time, but also, less tangibly, in the trust users have in the systems.

Basic Static Analysis Runtime:

Cost = (ct * cc) + (fp * ht)

where:

ct = compute/machine time

cc = cost per unit of compute time

fp = number of false positives

ht = human analysis cost per false positive (time or $$)
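Plugging invented numbers into the formula shows why false positive reduction dominates raw scan speed. The figures below are purely hypothetical:

```python
# Worked example of the cost model above, with invented numbers.
# ct = compute hours, cc = cost per compute hour,
# fp = false positives, ht = human cost per false positive.

def runtime_cost(ct, cc, fp, ht):
    return (ct * cc) + (fp * ht)

# First generation: fast scan, but 200 false positives at $50 each to verify.
first_gen = runtime_cost(ct=2, cc=10, fp=200, ht=50)   # 20 + 10000 = 10020
# Next generation: slower, compute-heavy scan, but only 10 false positives.
next_gen = runtime_cost(ct=6, cc=10, fp=10, ht=50)     # 60 + 500 = 560

assert first_gen == 10020
assert next_gen == 560
assert next_gen < first_gen   # accuracy outweighs raw scan speed
```

Even tripling the compute time, the human verification term dwarfs the machine term, which is why accuracy drives the total cost of ownership.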

The gains from next generation systems are dramatic and, when applied to modern development methods, substantially reduce the total cost of ownership of the tools. The additional load on the build system is real in any case, so care will need to be taken to configure build servers with the horsepower to process multiple scans, or with efficient queue management.


Software security requires that software teams create secure code and validate that the resulting executables are not vulnerable. Penetration testing executables is a good practice to ensure that the code was actually built correctly, but it is no substitute for doing it right in the first place. Static security analysis tools allow development teams to locate and mitigate security issues during the development process, which not only leads to less vulnerable code, but is also cheaper, because problems are located early in the development cycle where they are inexpensive to repair.

First generation static security analysis tools such as those from Fortify and Ounce let software teams rapidly locate security defects in source code, but they also produce a great deal of false positive reports that cost time and money to validate and diminish the programs' utility. Next generation static security analysis tools such as those from Checkmarx dramatically reduce the false positive rates of the first generation tooling and integrate with developers' and testers' existing tools, letting teams treat static security analysis as an integral part of the development process without imposing undue overhead while enforcing the development of secure software.

Static analysis, like software development in general, is an evolving science and is not yet perfect; however, the current crop of security focused tools makes it much harder to create insecure applications and is another step toward ensuring user data is safe in our modern world, where software is ubiquitous.