Walkthrough: Creating a Threat Model for a Web Application
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Summary: This walkthrough shows you how a development team put threat modeling into practice. It describes how the team created a threat model for an Internet-facing Web application during the early stages of application design. The walkthrough describes what the team learned during the modeling process. It also highlights some of the initial hurdles that they encountered and how they overcame those hurdles.
Contents
- Identifying Security Objectives
- At the Whiteboard
- Breaking Down the Application
- Finding the Threats
- Finding the Vulnerabilities
- Learn how a development team got started with threat modeling.
- See the common issues that can arise during a threat modeling activity.
- See how threat modeling can work in practice.
Conducting threat modeling meetings that work well and produce the desired results can be challenging, particularly if you are new to the threat modeling activity. Threat modeling can be difficult for a number of reasons. A common mistake is to spend too much time trying to solve problems instead of identifying threats. Another mistake is to spend too much time in the early analysis and fact-finding steps of the activity and fail to spend enough time on the most important step: threat identification.
The purpose of this narrative is to show you how a development team put threat modeling into practice and to enable you to learn by reading about the things they did well and the mistakes they made. It should also help to prepare you for your first threat modeling meeting by highlighting the types of questions and issues that can and often do arise. It describes how the team conducted their first threat modeling meeting, very early in their application development life cycle, before a detailed design had been completed.
Anyone who plans to participate in a threat modeling meeting should read this walkthrough to learn from the team's experiences. If you have not yet conducted a threat modeling meeting, this narrative should help you get started. If you have conducted or attended threat modeling meetings before, it should help you to improve subsequent meetings.
The application scenario covered by this walkthrough is a data-driven, Internet-facing Web application. The team has just completed gathering user requirements for the new application and has started to work on the architecture and design of the application. The team has not performed the threat modeling activity before.
Prior to the meeting, the architect, who was the meeting chairperson, asked the team members to review the document, "How To: Create a Threat Model for a Web Application at Design Time," so that they would have some familiarity with the threat modeling activity. The team was told to bring the following information to the meeting:
- Relevant policies, regulations, and compliance issues
- Deployment topology
- Knowledge of the application's key roles, features, and security design
The team was told to bring notepads with them, and the architect ensured that the most important tool, a whiteboard, was available in the meeting room. Before the meeting, the architect met with the business analyst, who walked him through the application's required functionality and key use cases. The architect also met with the system administrator and the network staff to update them on the upcoming application and to get some initial information regarding the deployment environment.
Note Useful inputs to the threat modeling activity include use cases and usage scenarios, data flows, data schemas, and deployment diagrams. While these items are useful, none of them are essential. All you need is knowledge of your application's primary function and architecture.
The following people attended the meeting:
- Architect. The architect organized and chaired the meeting. He wanted to create a threat model early in the design phase, to influence subsequent design decisions.
- Business Analyst. The business analyst was invited to the meeting to answer and clarify questions regarding the security objectives and primary use cases. Note that the business analyst is an optional attendee and might not be required if the architect can answer the questions.
- Developer. The developer wanted to understand the security implications of various design and implementation choices.
- Test Lead. The test lead was invited to the meeting because the architect wanted the threat model to help define the test strategy. The test lead wanted to know where to focus his security testing.
Note This meeting did not involve any operations or network staff. Make sure you know your operational constraints. If necessary, check with your IT staff about relevant corporate policies or other infrastructure constraints.
At the start of the meeting, the architect spent five minutes running through the agenda to make sure everyone understood the purpose of the meeting, what their respective roles were, and how the meeting would be run. The architect explained the following:
- Meeting purpose and goals
- Meeting guidelines
Meeting Purpose and Goals
The architect asked the team what they viewed as the important objectives for the meeting. The team identified the following objectives:
- To create a list of threats and vulnerabilities relevant to their application. This would help the team shape the application design.
- To raise the security awareness of the development team.
- To help cross-team communication.
- To bridge the gap between design and deployment (application and infrastructure).
- To identify areas of the security architecture that require more research.
The architect explained how they would organize the meeting. He wanted to begin with an architecture walkthrough to ensure that all attendees understood the application from the same perspective. He then wanted to briefly review the initial analysis and begin to identify the potential problem areas that warranted the most attention.
The architect outlined the following meeting guidelines:
- The meeting was to be limited to one hour.
- A single nominated person would take notes. In this case, the developer was asked to take notes.
- The team would record issues that needed to be taken offline for further discussion.
This was the first threat modeling session with the team; therefore, an hour seemed like a reasonable amount of time to get started and produce initial results. Because the appropriate team members were present, creating an overview of the application and breaking it down would be a relatively quick exercise.
The architect also ensured that all team members understood that the objective was not to identify security solutions; the objective was to identify potential threats and vulnerabilities. He also pointed out that it was important to not get into too much detail while working through steps 1, 2, and 3 of the activity. These are the steps concerned with identifying security objectives, creating an application overview, and decomposing the application. He stressed the importance of identifying what was known at the time but to not start designing new solutions during the meeting. He wanted the team to focus on what they know, what they do not know, and where more help is needed.
The architect specified the amount of time he wanted to spend on each step so that the meeting would complete in an hour. He wanted steps 1, 2, and 3 to be completed in approximately 20 minutes and the rest of the meeting time to be spent on threat and vulnerability identification. He also allowed time for a five-minute meeting wrap-up.
The architect began by asking the team to identify their security objectives. He asked them to think in terms of confidentiality, integrity, and availability. He asked a number of questions: "What do we want to prevent?", "What do we care most about?", "What is the worst thing that could happen?", and "What regulations do we need to be aware of?"
He explained that by being clear about their security objectives, the team could focus subsequent analysis and threat identification on the items they cared most about protecting.
The team suggested a number of important objectives. They did not want user credentials to be stolen from either the credential store or the network. They did not want user profile data to be accessed by attackers or unauthorized users. They were also particularly concerned about ensuring the security of customer credit card numbers.
The test lead reminded the team about the importance of Web site availability by asking: "What about the guarantees we make to our customers about service availability?"
The team agreed that, in addition to protecting critical client data, they must also focus on possible application-level denial of service threats. After five minutes of discussion, the team felt they had considered the things they did not want to happen and the key client data they needed to protect, so they moved on.
Next, the architect went to the whiteboard and sketched the application's high-level architecture.
Initially, this showed the Web server, database server, perimeter, and internal firewalls, and the communication channels linking the servers together (see Figure 1). During this activity, the focus was on authentication, authorization, and communication.
Figure 1. Preliminary whiteboard diagram
The architect also identified the main technologies they would use. They included IIS and ASP.NET on the Web server and a SQL Server database. TCP/IP would be used to communicate with SQL Server. The architect also illustrated the fact that the presentation, business, and data access logic would all reside on the Web server.
What Does the Application Do?
At this point, the architect paused to review some of the initial analysis work that had been completed before the meeting. He described the main features of the application, based on his earlier discussions with the business analyst. He explained that anonymous users must be able to view, search, and browse the company's product catalog. He also described how new users must be able to create new accounts and existing users should be able to tailor their account profiles, express preferences, change passwords, and so on.
After reviewing the initial analysis, the architect began to consider some of the key security mechanisms that the application would use.
The team began by examining authentication. The test lead asked, "When do we need to authenticate users?" The architect explained that the system would allow anonymous Internet users to browse various areas of the Web site, including the page that displays the product catalog. Users needed to be authenticated only when they were ready to purchase products and perform the check-out function. The plan was to use Forms authentication against a SQL Server user store on the internal network. The user store would maintain end user account details, including user names, passwords, and role information.
The architect noted that they would need to consider authentication threats and how they were going to secure user credentials, but that they should do that later when they started thinking specifically about threats.
The developer wanted to know how the database would authenticate the application. The architect explained that the application would need to use Windows authentication to communicate with the database. SQL Server authentication was no longer an option because it had recently been disabled by the database administrator. This was part of the company's new corporate security policy. The developer noted that he would need to experiment with connecting ASP.NET to SQL Server using Windows authentication.
After reviewing authentication as it related to the application tiers, the team began to discuss authorization.
The architect asked, "How are we going to authorize users?" The team discussed role-based authorization and how the Web application would use .NET roles to control access to business functionality. They discussed the various types of users who would be allowed to access the system. These included members of the public who wanted to buy products and also catalog administrators who should be able to update product prices and add and remove new and existing product items. The team planned to use separate roles to differentiate these two groups of users, although they expressed concerns about exposing the catalog maintenance functionality through the customer-facing Web application. The developer noted this as a potential vulnerability.
The team discussed how the application account would be authorized in the database. The team felt that this would be done using a combination of database users and database roles, although they postponed further discussions on this subject until they could meet with the database administrator. The architect made a note to invite the database administrator to the next threat modeling meeting so they could pursue this topic in more detail.
After discussing the application's main functionality and the key authentication and authorization points, the architect wanted the team to start examining the next level of detail to identify trust boundaries, entry points, and data flows.
The architect began by asking the team to consider trust boundaries. The test lead interjected: "Hold on. What's a trust boundary?" The architect told the team to think in terms of data that is passed to the application: "Can you trust the data? Can you trust the caller? What privileges are required to execute a particular operation? Which operations require extended privileges? Where are the authentication points? Where are the access checks performed?"
The team identified the entry points to the privileged business logic where role-based access checks are performed as a trust boundary. The developer also noted that the database would need to be able to trust the Web application to properly authenticate and authorize users and to validate input data. The developer then asked, "Can our data access components trust the business components to validate data properly, or do we need the data components to validate data too?" The team decided that for performance reasons, they would prefer the business components perform complete data validation and the data components would need to trust the business components to pass validated data. While discussing this, they started to note the entry points. They realized that if an attacker was able to call the data components directly, the attacker could inject malicious data.
"What about external systems?" The test lead wanted to know whether the third-party credit card validation service should be considered inside or outside of the trust boundary. "Can the service be trusted?" At this point, the team knew little about this service because their application was the first to use it. The architect knew that it provided a Web service interface, so the key would be to ensure that the application was talking to the legitimate service and not a spoof service. The team talked about options for mutual authentication, but they noted that they needed more information from the service provider. The developer added that they would have to carefully consider how the credit card numbers passed to the service could be protected while they were being transmitted.
After a few more minutes of discussion about trust boundaries, the architect indicated that they had enough detail for now and that they should move on. He pointed out that as the design matures, they would be able to come back and add more information and identify more granular trust boundaries such as those between processes and components.
Next, the team considered entry points because these are principal attack targets. The team began by identifying the dynamic Web pages that they knew they would need. Although the detailed design was far from complete, the team knew they would require a main product page, a search page, a logon page, a new user registration page, a view shopping cart page, and a checkout page.
The developer noted these as primary entry points. They then started to consider inner entry points such as the data access component APIs and the stored procedures that they would be using in the database. They spent little time discussing this because their design had not yet specified which components would be needed and how they would communicate with one another. They would address this in more detail in a subsequent meeting.
The team went on to discuss exit points. The architect asked, "Where do we write data back out to the user or to another system?" The architect noted that they should pay particular attention to output that contained input or other forms of untrusted data. The test lead then asked, "Can we trust the database?"
The team agreed that they could make no assumptions about the data in the database because it was shared by other applications. They would need to carefully sanitize all data read from the database before writing it to the client. At a minimum, they needed to properly HTML-encode the data to mitigate the risk posed by script injection attacks and cross-site scripting. While they could perform further sanitization of this data, the test lead was concerned about the performance implications, particularly of sanitizing the product catalog data. The test lead agreed to investigate the performance implications of sanitizing the data.
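The HTML-encoding step the team agreed on can be sketched briefly. The walkthrough's application is ASP.NET, which provides this through `HttpUtility.HtmlEncode`; the following standalone sketch uses Python's standard `html` module purely to illustrate the principle, and the helper name is hypothetical:

```python
import html

def render_product_name(raw_value: str) -> str:
    """Encode untrusted database output before writing it to the page.

    HTML-encoding turns markup characters into entities, so any injected
    script stored in the shared database is displayed as text, not executed.
    """
    return html.escape(raw_value, quote=True)

# A malicious value that another application stored in the shared database:
tainted = '<script>alert("xss")</script>'
print(render_product_name(tainted))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Encoding at the point of output, rather than at the point of storage, is what lets the team make no assumptions about data already in the shared database.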
The team identified the product catalog page, the checkout page, and the view shopping cart page as the main pages that retrieved and displayed data from the database.
At this early stage in the application design, the team had not yet considered which individual components would be required or how they would interact. They did not have any data flow diagrams or sequence diagrams that would have helped. However, they started by identifying data flows at a high level. They considered data flows from each of the identified Web pages and described how the data would pass through the system.
The developer asked about the third-party service: "What about data that flows to and from the credit card validation service?"
The team used the whiteboard to map the data flows to and from the third-party Web service. They paid particular attention to how the customer's credit card number was handled. They noted they would need to do further data flow analysis after their design had matured and they had identified processes and components. At that point, they would be able to examine data flows between components.
After gathering key information about the application, the team conducted a brainstorming session to identify relevant threats. The team started by using a list that enumerated common threats grouped by application vulnerability category. They also used the diagram of threats provided with the threat pick list. These approaches provided a simple yet effective way to identify potential threats.
For each of the vulnerability categories, the team systematically examined the application layers and considered the key functionality in critical areas, such as authentication, authorization, input and data validation, and so on.
The architect asked the team to consider threats related to their proposed authentication solutions. They already knew that they would use Forms authentication with a SQL Server user store. The developer started by asking "How are we going to store user credentials?" The team knew that when a user registered on the site, the user would create a user name and password. The architect explained that he did not want to store passwords or encrypted passwords because of the associated key management issues. The team concluded that they should store password hashes, although they were unsure about whether a brute force attack could easily decipher the hashes. They noted their first threat:
Threat 1: Brute force attacks against the credential store.
Next, the developer asked: "So what about protecting credentials over the wire?" The team identified where the credentials would be passed over the network. This included the Internet link between Web browser and Web server because they were going to use Forms authentication and the internal channel between the Web server and database server. The team realized that it would be relatively easy for an attacker to obtain the clear text credentials posted from the logon page.
The developer interjected: "Why do we care about this when we will use SSL?" The architect correctly pointed out that they should not dismiss threats at this point. He reminded them that, by identifying as many relevant threats as possible, including those that they knew their design would mitigate, they would produce an accurate security profile of their system, one that identified both strengths and weaknesses. Then the team identified their second threat:
Threat 2: Network eavesdropping between browser and Web server to capture client credentials.
Next, they began to consider cookie replay and session hijacking attacks. The team began to consider whether it would be possible for an attacker to capture an authentication cookie and use that to spoof identity and access the application by impersonating an authenticated identity. Their early design did not state that SSL should be used to protect the authentication cookie in addition to the logon page, so they recorded their third threat:
Threat 3: Attacker captures authentication cookie to spoof identity.
Next, the team considered input attacks, such as SQL injection and cross-site scripting. They were aware of the importance of validating all input, but at this stage they were unsure of precisely how they would handle input validation. They noted two more threats:
Threat 4: SQL injection enabling an attacker to exploit an input validation vulnerability to execute commands in the database and access and/or modify data.
Threat 5: Cross-site scripting, where an attacker succeeds in injecting script code.
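Threat 4 comes down to building SQL text by concatenating untrusted input; parameterized queries remove the vulnerability by passing input as data rather than as SQL syntax. The following sketch uses Python's built-in sqlite3 as a stand-in for the team's SQL Server back end, with a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'Widget')")

user_input = "Widget' OR '1'='1"  # attacker-controlled search term

# Vulnerable: the input is spliced into the SQL text,
# so the injected OR clause becomes part of the query.
vulnerable = conn.execute(
    "SELECT id FROM products WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value as data; it can never alter the query.
parameterized = conn.execute(
    "SELECT id FROM products WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)      # [(1,)] -- the injected predicate matched every row
print(parameterized)   # []    -- no product has that literal name
```

The same pattern applies to the stored procedures the team planned to use: parameters, never string concatenation.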
The Initial List of Threats
The team continued to examine the vulnerability categories. As they did this, they annotated their whiteboard diagram with the potential threats that they identified and the developer recorded the threats. After a further 10 minutes of discussion, their picture looked like Figure 2.
Figure 2. Revised whiteboard diagram
Within 15 minutes, the team had identified and noted the following threats:
- Brute force attacks against the credential store
- Network eavesdropping between browser and Web server to capture client credentials
- Attacker captures authentication cookie to spoof identity
- SQL injection, enabling an attacker to exploit an input validation vulnerability to execute commands in the database and access or modify data
- Cross-site scripting (XSS) where an attacker injects script code
- Cookie replay or capture, enabling an attacker to spoof identity and access the application as another user
- Information disclosure with sensitive exception details propagating to the client
- Unauthorized access to the database if an attacker manages to take control of the Web server and run commands against the database
- Discovery of encryption keys used to encrypt sensitive data (including client credit card numbers) in the database
- Unauthorized access to Web server resources and static files
The team recognized that this was just a starting point, but that they could act upon these issues immediately based on what they knew.
After reviewing the threats, the team turned their attention to the known aspects of their current design to identify potential vulnerabilities. They did this by examining their application design layer by layer, considering each of the vulnerability categories at each layer. They began with authentication.
The team had previously noted a potential vulnerability in how they were planning to store user passwords. They were concerned about the potential for brute force attacks against the user store. This was noted by the developer. He then asked, "What's the likelihood of a user guessing another user's password?" Their design had not yet considered if or how they would enforce complex passwords, even though enforcing password complexity would significantly reduce the likelihood of successful password guessing. The developer noted this as a potential vulnerability.
"What about retry attempts; how many times is a user going to be able to enter an incorrect password before the system locks him out?" This was another area that their design did not currently address, so the developer noted it.
The team went on to discuss the danger of denial of service attacks and whether their system should lock out accounts after a set number of password retry attempts. The team realized that they would need to give this area further consideration at a later time, so they recorded this as an action item.
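The retry/lockout tradeoff the team deferred can be reasoned about with a simple failed-attempt counter. This hypothetical sketch locks an account after a fixed threshold; a production design would also need the time-window or back-off behavior the team flagged, since a hard lockout is itself a denial of service lever:

```python
MAX_ATTEMPTS = 5  # illustrative threshold; too low makes lockout a DoS vector

class AccountLockout:
    """Track consecutive failed logons per user and lock after a threshold."""

    def __init__(self, max_attempts: int = MAX_ATTEMPTS):
        self.max_attempts = max_attempts
        self.failures: dict[str, int] = {}

    def is_locked(self, user: str) -> bool:
        return self.failures.get(user, 0) >= self.max_attempts

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)  # reset the counter on successful logon

guard = AccountLockout()
for _ in range(5):
    guard.record_failure("alice")
print(guard.is_locked("alice"))  # True
```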
Input and Data Validation
During their discussion of input validation, the team asked the following types of questions: "Is all input validated?," "How is it validated?," "Is it validated for type, length, format, and range?," "What does good data look like?," and "Where is it validated?"
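The team's four questions, type, length, format, and range, map directly onto a layered check. A hypothetical Python sketch validating an order-quantity field (the field, bounds, and helper name are illustrative, not from the original design):

```python
import re

def validate_quantity(raw: str) -> int:
    """Validate an order-quantity field for length, format, type, and range.

    Raises ValueError on any failure. Returning a typed value means
    unvalidated strings cannot travel further into the application.
    """
    if len(raw) > 4:                   # length: reject oversized input early
        raise ValueError("too long")
    if not re.fullmatch(r"\d+", raw):  # format: digits only ("what good data looks like")
        raise ValueError("not numeric")
    value = int(raw)                   # type: convert to the expected type
    if not 1 <= value <= 99:           # range: business-rule bounds
        raise ValueError("out of range")
    return value

print(validate_quantity("12"))  # 12
```

Validating at the server, regardless of any client-side checks, is what answers the team's "where is it validated?" question.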
Next, the team considered exception handling. They discussed the implications of inadvertently returning too much detail to the user if an exception occurred. They asked: "What information is needed for troubleshooting?" and "What information should be presented to the end user?" The team realized that they had not thought about their exception handling strategy, and they needed to design their application to ensure that they did not pass raw exception details back to the client or pass any details that would be useful to an attacker.
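The strategy the team arrived at, log full details server-side while returning only a generic message to the client, can be sketched as follows. The correlation-id pattern is an assumption added for illustration, not part of the team's design:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(work) -> str:
    """Run a request handler; never let raw exception details reach the client.

    The correlation id lets support staff locate the full stack trace in the
    server log without exposing troubleshooting detail to an attacker.
    """
    try:
        return work()
    except Exception:
        incident = uuid.uuid4().hex[:8]  # hypothetical correlation id
        log.exception("request failed (incident %s)", incident)
        return f"An error occurred. Reference: {incident}"

def broken():
    raise RuntimeError("connection string: server=db01;uid=app;pwd=...")

print(handle_request(broken))  # generic message; the secret stays in the log
```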
The Initial List of Vulnerabilities
By the time the team had reviewed the vulnerability categories, they had identified the following vulnerabilities:
- User password storage
- Lack of password complexity enforcement
- Lack of password retry logic
- Missing or weak input validation at the server
- Failure to validate cookie input
- Failure to sanitize data read from a shared database
- Failure to encode output leading to potential cross-site scripting issues
- Exposing an administration function through the customer-facing Web application
- Exposing exception details to the client
So what did the team get out of their meeting? They achieved a great deal in only a very short period of time. They produced:
- A structured and systematic view of their application at a particular point in time.
- An evaluation of the basic components and how they interact.
- A list of potential threats and vulnerabilities.
- Knowledge they would use to shape their subsequent security design.
Most importantly, they made a very good start at threat modeling and have a model that they can build upon for future meetings.
At the end of their initial meeting, the team agreed to the following next steps:
- The developer had planned on using SQL authentication to connect to the database. He now knew that this was not an option and that he must use Windows authentication. The developer needed to experiment with running Web applications in isolated application pools using a custom identity. He also needed to meet with the database administrator to discuss how to authorize this account in the database.
- The test lead needed to investigate the performance implications of HTML-encoding the product catalog output data.
- The team needed to review design decisions to determine what changes must be made based on their discovery of potential threats and vulnerabilities.
- The test lead wanted to review his test strategy and test cases.
- The team needed to review the issues that they had noted but had not been able to fully resolve due to a lack of information. They needed to determine what steps needed to be completed to provide a more robust and detailed threat model.
- The architect needed to schedule the next threat modeling meeting when the team could reconvene and continue to evolve the threat model based on their continued design.