
Building and Configuring More Secure Web Sites

 

December 2002

Authors:

Timothy Bollefer, Girish Chander, Jesper Johansson, Mike Kass, Erik Olson—Microsoft Corp.

Scott Stanfield, James Walters—Vertigo Software, Inc.

Summary: A Web site built by Microsoft engineers using the Microsoft .NET Framework, Microsoft Windows 2000 Advanced Server, Internet Information Services 5.0, and Microsoft SQL Server 2000 successfully withstood over 82,500 attempted attacks to emerge from the eWeek OpenHack 4 competition unscathed. This article explains how the solution was built and configured and provides best practices for software developers and systems administrators to secure their own solutions. (20 printed pages)

Contents

Introduction
Web Application
Internet Information Services (IIS) 5.0
Windows 2000 Advanced Server Operating System
IP Security Standards (IPSec) Policies
Remote Management & Monitoring
SQL Server 2000
Passwords
Conclusion
For More Information

Introduction

In October, eWeek Labs launched its fourth annual OpenHack online security contest. This year's contest, the third year of participation for Microsoft®, was designed to test enterprise security by exposing systems to the real-world rigors of the Web. Both Microsoft and Oracle were given a sample Web application by eWeek and were asked to redevelop the application using their respective technologies. Individuals from throughout the United States were then invited to attempt to compromise the security of the resulting sites in exchange for cash prizes. Acceptable breaches consisted of cross-site scripting attacks, dynamic Web page source code disclosure, Web page defacement, posting malicious SQL commands to the databases, and theft of credit card data from the databases used.

Microsoft developed its application using the Microsoft® .NET Framework, an integral Windows component that supports building and running the next generation of applications and XML Web services. The application was hosted on Microsoft® Internet Information Services (IIS) 5.0 and used Microsoft® SQL Server™ 2000 as its database. All of the servers ran on Microsoft® Windows® 2000 Advanced Server operating system. (It should be noted that Microsoft® Windows Server 2003 with IIS 6.0 would have been used had it been released at the time of the contest. In Windows Server 2003, several of the steps we took to "lock down" the operating system and Web server are already completed by default.)

The results of the competition may be found at http://www.eweek.com/. In total, the Microsoft solution withstood over 82,500 attacks. Microsoft emerged from OpenHack 4 unscathed, as it did in its previous engagements with the first and second OpenHack competitions. This article discusses each of the technologies used to explain how the solution was built and configured, and how developers and administrators securing their own solutions may apply these best practices.

Web Application

The application itself was modeled after the eWeek eXcellence Awards Web site, where individuals can nominate their companies' products or services for an award. Using the site, one could set up an account to enter a product or service for judging, submit a credit card number to pay the entry fee, and gather information on the award itself. Microsoft built its solution using the .NET Framework, an integral Windows component for building and running applications and XML Web services. Most of the development centered around the Framework's ASP.NET, ADO.NET, and cryptography class libraries that provide, respectively, functionality for building Web-based applications; accessing and manipulating data; and encrypting, decrypting, and ensuring the integrity of data.

Forms Authentication

The Microsoft® ASP.NET classes provide several options for authenticating users (i.e. using some credentials—a username and password, for example—to confirm the identity of a given user). These options include integrated Windows authentication, basic, digest, Microsoft® .NET Passport, client certificates, and so on. Per eWeek's request, forms-based, or custom, authentication was selected for the OpenHack solution.

When a user logs in via forms authentication, an encrypted cookie is created and is used to track the user throughout the site. (Technically, a cookie is a text-only string generated by a Web site entered into the memory of the user's Web browser, for the purpose of identifying that user as he or she navigates through the site.)

If the user has not logged in and requests a secured page, then the user will be redirected to the log-in page. All of this can be configured simply using the application's XML-based Web.config file, which is automatically generated by Microsoft® Visual Studio® .NET—an integrated development environment for building .NET Framework-based applications—to store configuration for ASP.NET Web applications.

In our application's root folder, we added the lines below to the <system.web> section in our Web.config file to request forms-based authentication and specify the location of the log-in page.

<authentication mode="Forms">
   <forms loginUrl="Login.aspx" name="OPSAMPLEAPP"/>
</authentication>

This top-level configuration file applied to all pages in the application. We then created a sub-directory with a second Web.config file. This file applied only to a few select pages in the application and prevented unauthenticated (i.e. anonymous) users from accessing them. This second .config file inherited the authentication information from the top-level .config file.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>    
  <system.web>
       
    <authorization>
      <deny users="?" />
    </authorization>
        
 </system.web>
</configuration>

By using the two .config files in this manner, unauthenticated users could only access the home page and a few other pages, while authenticated users were given additional access to those pages which required a user account on the site.

The log-in page itself contained fields that accepted a username and password from the user and sent them back to the Web server over Secure Sockets Layer (SSL), which prevented anyone from "sniffing" the credentials as they passed over the network. When a user created a new account, the new password would be encrypted using the Triple DES algorithm, as explained in the Storing Secrets section, and stored in the database along with the username. With future log-ins, the password sent in from the log-in page would be encrypted using Triple DES by the Web application and then compared against the encrypted password stored in the database. If the two matched, the Web application would generate an encrypted cookie containing the user's username and full name using the System.Web.Security.FormsAuthentication class in the ASP.NET library. The cookie would be transmitted back to the user and stored in the user's browser until it timed out, and it would then be sent along with any future requests from that user to the Web site. All transmissions involving the cookie were conducted using SSL to prevent "replay" attacks, in which an attacker sniffs a cookie off the network and then uses it to impersonate the user. SSL is highly recommended in any situation where sensitive information, or credentials that could be used to access sensitive information, is sent across a public network.
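A simplified sketch of such a log-in handler follows. (The VerifyCredentials helper and the control names are hypothetical; the actual OpenHack code was not published.)

private void LoginButton_Click(object sender, System.EventArgs e)
{
    // Hypothetical helper: encrypts the submitted password with Triple DES
    // and compares it against the encrypted value stored in the database.
    if (VerifyCredentials(UserName.Text, Password.Text))
    {
        // Issues the encrypted forms-authentication cookie and returns the
        // user to the originally requested page. The second argument (false)
        // makes the cookie session-only rather than persistent.
        System.Web.Security.FormsAuthentication.RedirectFromLoginPage(
            UserName.Text, false);
    }
    else
    {
        ErrorLabel.Text = "Invalid username or password.";
    }
}

Because the OpenHack cookie also carried the user's full name, the actual application would have constructed a FormsAuthenticationTicket with custom user data rather than calling RedirectFromLoginPage directly.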

Input Validation

OpenHack implemented different types of validation at different levels within the application in order to ensure that input from outside the code (i.e., user input) could not be used to change the behavior of our application. Validating input is a key security best practice and helps protect against buffer overruns, cross-site scripting attacks, and other potential attempts to execute malicious code within the context of the application. Providing several layers of protection as we did here is another important best practice known as "defense in depth." It is always critical to plan for the worst in this manner and assume that there is a possibility that one or more tiers of the solution could be compromised.

Our first line of defense was the validation controls provided by ASP.NET, specifically the RegularExpressionValidator class and the RequiredFieldValidator class, which ensured that all required input was present and was valid data for the purpose at hand. We allowed only characters that were needed to provide the desired user experience, which in our case was a fairly narrow set. For example, some fields only allowed "[ ',\.0-9a-zA-Z_]*", equating to spaces, apostrophes, commas, periods, underscores, letters, and numbers. Other characters, which might have been used to send malicious script to the Web site, were not allowed.
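Declaratively, such validation looks like the following Web Forms markup (the control names are illustrative; the regular expression is the one given above):

<asp:TextBox id="CompanyName" runat="server" />
<asp:RequiredFieldValidator runat="server"
    ControlToValidate="CompanyName"
    ErrorMessage="Company name is required." />
<asp:RegularExpressionValidator runat="server"
    ControlToValidate="CompanyName"
    ValidationExpression="[ ',\.0-9a-zA-Z_]*"
    ErrorMessage="Company name contains invalid characters." />

On postback, checking the page's IsValid property confirms that every validator passed before the input is used.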

In addition to the text boxes, our application accepted some input via a query string, the portion of a dynamic URL that contains parameters used to generate a page. This data was validated with regular expressions, using the functionality provided by the System.Text.RegularExpressions.Regex class, as shown below:

// Allow only strings composed entirely of digits.
Regex isNumber = new Regex("^[0-9]+$");
if (isNumber.IsMatch(inputData)) {
    // use it
}
else {
    // discard it
}

Regular expressions are sets of symbols and syntactic elements used to match patterns of text. In the case of the OpenHack application, they were used to ensure that the content of the query string was appropriate and non-malicious.

All data access in the application was done via parameterized stored procedures, which were developed using the T-SQL language and which, by definition, live within the database itself. Limiting interaction with the database to stored procedures is always a best practice. In the absence of stored procedures, SQL queries would have to be dynamically constructed by the Web application; if the Web tier were compromised, an attacker could then inject malicious commands into the database query to retrieve, alter, or delete data stored in the database. With stored procedures, the Web application's interaction with the database is limited to the specific, strongly typed parameters it can send in via the stored procedures. Whenever a developer invokes a stored procedure using the .NET Framework, the parameters sent to that stored procedure are checked to ensure that they are of the types (integer, 8-character string, and so on) expected by the stored procedure. This is another layer of protection on top of the Web-tier validation, ensuring that all input data is of appropriate format and cannot be construed as an actionable SQL statement in its own right.
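The following is a sketch of the pattern. (The stored procedure name, parameter name, and values are hypothetical; the actual OpenHack schema was not published.)

// Requires the System.Data and System.Data.SqlClient namespaces.
int entryId = 42;  // example value, already validated at the Web tier
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("GetEntry", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    // The parameter is strongly typed: input that cannot be represented
    // as an integer is rejected before the call is made, and the value is
    // never concatenated into a SQL statement.
    command.Parameters.Add("@EntryId", SqlDbType.Int).Value = entryId;

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        // read results
    }
}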

Before any data is returned to the user, it is HTML encoded. This is done simply using the HtmlEncode method in the System.Web.HttpServerUtility class, as shown below.

SomeLabel.Text = Server.HtmlEncode(username); 

HTML encoding helps to prevent cross-site scripting attacks. In the event an attacker compromised the database, he or she could enter script into records that would later be returned to the user and executed in the browser. With HTML encoding, most script commands are automatically translated to harmless text.

Storing Secrets

It is critical to safely store secrets—such as the database connection string that provides database log-in information—to prevent attackers from accessing and using such secrets to read or manipulate your data or reconfigure your solution. The value of connection strings to attackers was greatly reduced in this solution by the fact that we used integrated Windows authentication to access the database; the string contained only the server location and database name rather than specific credentials, such as a password.
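For illustration, such a connection string looks like the following. (The server name is hypothetical; Awards was the database built for the contest.)

// No credentials appear in the string; the connection authenticates as
// the Windows account under which the ASP.NET worker process runs.
string connectionString =
    "Data Source=OHSQL;Initial Catalog=Awards;Integrated Security=SSPI;";
SqlConnection connection = new SqlConnection(connectionString);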

By default, the database connection wizards in Visual Studio .NET will store the connection string as a property value in the "code-behind" file—the file that contains the core logic of the application, in contrast to the file providing the user interface definition.

This provides developers with convenient access to the string. However, if an attacker managed to log in to the physical machine containing the source code or a configuration file holding the string, the connection string could be read and used to access the database for malicious purposes.

In a production environment, we always recommend protecting the connection string and any other credentials you may need. One approach to protecting credentials is the approach we took in OpenHack 4: encrypting the connection string, storing it in the registry, and using Access Control Lists (ACLs) to ensure that only administrators and the ASPNET worker process (defined in the IIS section) have access to the registry key.

We encrypted the database connection string using the Windows 2000/XP Data Protection API (DPAPI) functions CryptProtectData and CryptUnprotectData, which provide us with the ability to encrypt secrets without having to directly manage (or store) the keys needed to access those secrets.
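At the time, calling these functions from managed code required P/Invoke declarations. The sketch below instead uses the ProtectedData wrapper added in a later version of the .NET Framework, which exposes the same DPAPI functionality in a compact form:

// Requires the System.Security.Cryptography and System.Text namespaces;
// connectionString is the secret to protect, as shown earlier.
byte[] plaintext = Encoding.UTF8.GetBytes(connectionString);

// Encrypt with a key derived from the machine context; the application
// never stores or manages the key itself.
byte[] ciphertext = ProtectedData.Protect(
    plaintext, null, DataProtectionScope.LocalMachine);

// Later, on the same machine, recover the secret.
byte[] recovered = ProtectedData.Unprotect(
    ciphertext, null, DataProtectionScope.LocalMachine);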

While the DPAPI is excellent for encrypting user- or machine-specific data, it is less effective as a way to encrypt information stored in a shared database, such as credit card numbers and passwords. This is because the DPAPI functions create and internally store encryption keys based on the local machine and/or user information. In a Web farm scenario, each Web server would thus have its own encryption keys, preventing the servers from reading each other's encrypted data.

Therefore, in order to demonstrate the practices that would be used in a Web farm scenario, we generated a random Triple DES encryption key and initialization vector. This functionality was provided using the TripleDES class in the System.Security.Cryptography classes of the .NET Framework. The keys were used to symmetrically encrypt password and credit card information that was stored in the database. For storing credit card information, a cryptographically strong random first block was chosen as a salting technique.
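A minimal sketch of generating and using such key material follows. (The actual OpenHack code was not published; the password value is a placeholder.)

// Requires the System.Security.Cryptography and System.Text namespaces.
TripleDESCryptoServiceProvider tripleDes = new TripleDESCryptoServiceProvider();
tripleDes.GenerateKey();  // random Triple DES key
tripleDes.GenerateIV();   // random initialization vector

// Symmetrically encrypt a secret such as a password. Any server holding
// the same key and IV (for example, every machine in a Web farm) can
// decrypt the stored value.
string password = "example-secret";  // placeholder value
byte[] data = Encoding.UTF8.GetBytes(password);
ICryptoTransform encryptor = tripleDes.CreateEncryptor();
byte[] cipher = encryptor.TransformFinalBlock(data, 0, data.Length);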

After making a backup copy of the keys, we encrypted the keys themselves using the DPAPI and stored them in the registry, again using ACLs to limit access to administrators and the ASPNET worker process. Encrypting the keys ensures that should an attacker actually locate and access the data, the attacker won't be able to decrypt the data without first decrypting the keys. This is another good example of "defense in depth."

Internet Information Services (IIS) 5.0

We made several moderate changes to the Internet Information Services (IIS) 5.0 Web server included in Windows 2000 Advanced Server to help prevent against attacks directed at the Web server itself. To begin, we installed all public security patches listed on the TechNet Web site to ensure we had the latest enhancements. Installing the latest Service Packs and patches is a critical security practice when running any piece of software.

Next, we moved the Web site on disk from its default location at c:\inetpub\ to another volume. Thus, should the system be compromised in some way, the attacker would have difficulty navigating the directory tree without actually seeing it firsthand; for example, the attacker could not reach the operating system volume simply by entering ..\ paths relative to the well-known default location.

Next we ran the IIS Lockdown tool using the included template for a static Web server. This removed all other dynamic content types, which weren't used in this application. It is always an important best practice to reduce the surface area exposed to a potential attacker in this manner. The IIS Lockdown tool is freely available. It is a wonderful resource and should be used by all administrators running IIS.

At this point, we installed the .NET Framework Redistributable, which is required to run .NET Framework applications, the .NET Framework Service Pack 2, the latest .NET Framework hot fix, and MDAC 2.7, a component required by the .NET Framework.

In this scenario, our application only used dynamic files with the .aspx extension and a few static content types for images and style sheets. Since we didn't require the other IIS application mappings that were installed by the .NET Framework, we rebound those extensions to the 404.dll extension included with the IIS Lockdown tool. Again, this was done to reduce the exposed surface area of the solution.

The application ran its ASP.NET code under the ASPNET account, the low-privileged local account created by default for this purpose. The principle of "least privilege" is important for all administrators—never give an account more privileges than it absolutely requires. Locking the solution down in this manner is analogous to reducing the surface area exposed.

(The ASPNET account is created as a local account upon installation of the .NET Framework Redistributable and belongs only to the "Users" group on that machine. It therefore has all privileges associated with the "Users" group and can interact with any resources to which the group is granted access. In addition, it receives, by default, specific full-access permissions to the Temporary ASP.NET Files directory and to %windir%\temp, as well as a read permission to the Framework installation directory.)

We added this ASPNET account to the local "Web application group" created by the IIS Lockdown tool to prevent the process from running any unauthorized command-line executables in the event of a breach.

We then modified the permissions for this group and allowed users in this group to run the .NET Framework C# compiler and resource converter (Csc.exe and Cvtres.exe) which were required by the application.

The IIS Lockdown tool installed URLScan 2.5, an ISAPI filter that monitors and filters all incoming requests to the IIS Web server based on rules such as query length and character sets. We configured URLScan to only allow the set of extensions we used in the application and used it to block long requests. This is another example of defense in depth—an extra layer of protection against attempts to insert malicious code via user input. URLScan is freely available on TechNet as well as in the IIS Lockdown tool. Like the IIS Lockdown tool mentioned earlier, URLScan is a wonderful resource and should be used by all administrators running IIS.
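An illustrative UrlScan.ini excerpt showing this approach (not the actual OpenHack configuration; the extensions and limits shown are examples):

[Options]
UseAllowExtensions=1

[AllowExtensions]
; Only the extensions the application actually serves.
.aspx
.htm
.jpg
.gif
.css

[RequestLimits]
; Reject abnormally long requests.
MaxUrl=260
MaxQueryString=2048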

We set appropriate permissions for the Web content directories, which granted the ASP.NET process read access to the content files and the anonymous user appropriate read-only access to content that was served.

We restricted access to the log directories for IIS and URLScan to the System account and members of the Administrators group. Constraining access to the log files is always a good idea, making it difficult for attackers to alter them to cover their tracks and hide any potentially useful information about a vulnerability exploited.

Windows 2000 Advanced Server Operating System

The servers used for the contest all ran the Windows 2000 Advanced Server operating system with Service Pack 3 installed, which was the most recent Service Pack at the time of the contest. All security patches issued since Service Pack 3 and on the TechNet Web site were installed. Again, keeping up with the latest security patches is a critical best practice for systems administrators.

Once these updates were installed, several configuration changes were made to further enhance operating system-level integrity. First, all unnecessary operating system services were disabled. This is also always a best practice. By turning off these services, we were able to free up system resources and reduce the surface area exposed to attacks. The specific services that can be disabled will vary depending upon the needs of each individual solution. Messenger, Alerter, and ClipBook are just a few examples of the services we disabled.

We strongly recommend reading the Windows 2000 Server Resource Kit to help determine which services you don't need. Then test thoroughly to ensure your application functions properly without them. Finally, turn these services off by changing their start-up status to disabled.

In our application, we also used the Registry Editor (Regedit.exe) to change four registry settings to further tighten security; a scripted equivalent is sketched after the list below. We recommend all of these as best practices, provided that you don't require the functionality being disabled.

  • Create Registry Key: NoLMHash
    (It is important to note that this is a key in Windows 2000 and a value in Windows XP and Windows Server 2003.)
    • Location: HKLM\System\CurrentControlSet\Control\LSA
    • Purpose: Prevents the operating system from storing user passwords in the LM hash format. This format exists only for older clients, such as Windows 3.11, that do not support NTLM or Kerberos. The risk in creating and retaining the LM hash is that it is comparatively easy to crack; an attacker who recovered passwords stored in this manner could reuse them on other machines on the network.
  • Create Registry Value: NoDefaultExempt
    • Location: HKLM\System\CurrentControlSet\Services\IPSEC
    • Purpose: By default, IPSec will allow incoming traffic with a source port of 88 to query the IPSec service for information about connecting to the machine, regardless of the IPSec policies you put in place. By setting this value, no communication is allowed between ports except as allowed by the IPSec filters we set up, as described in the IPSec Policies section.
  • Create Registry Value: DisableIPSourceRouting
    • Location: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
    • Purpose: Prevents TCP packets from explicitly determining the route to the final destination, requiring the server to determine the best route. This is a layer of protection against man-in-the-middle attacks, in which the attacker routes packets through his or her servers, which sniff the contents as they pass through.
  • Create Registry Value: SynAttackProtect
    • Location: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
    • Purpose: Protects the operating system from certain SYN-flood attacks by limiting the resources allocated to incoming requests. In other words, this helps block attempts to use SYN (synchronization) requests between a client and server for denial-of-service attacks.
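For repeatable configuration, the same changes can be scripted rather than made by hand. The sketch below uses the .NET registry classes; the specific numeric values (1, 2, and 2) are the commonly recommended hardening settings, not values published for the contest:

// Requires the Microsoft.Win32 namespace.
// NoLMHash is a key on Windows 2000 (a value on later versions).
Registry.LocalMachine.CreateSubKey(
    @"System\CurrentControlSet\Control\LSA\NoLMHash");

RegistryKey ipsec = Registry.LocalMachine.OpenSubKey(
    @"System\CurrentControlSet\Services\IPSEC", true);
ipsec.SetValue("NoDefaultExempt", 1);  // int values are written as REG_DWORD
ipsec.Close();

RegistryKey tcpip = Registry.LocalMachine.OpenSubKey(
    @"System\CurrentControlSet\Services\Tcpip\Parameters", true);
tcpip.SetValue("DisableIPSourceRouting", 2);
tcpip.SetValue("SynAttackProtect", 2);
tcpip.Close();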

In addition, although not directly related to preventing an attack, we enabled several audit logs to cover log-in and log-off events, account management, policy change, and system events. This helped us better monitor the servers during the contest.

IP Security Standards (IPSec) Policies

Beginning with Windows 2000, Microsoft has provided support for managing the authentication and encryption of Internet Protocol (IP) traffic using IP Security Standards (IPSec), an extension to the IPv4 protocol. Figure 1 below shows the default Server (Request Security) policy; we created a policy specifically for the contest.


Figure 1. Server (Request Security) dialog box

The IPSec rules were configured using the Local Security Settings Microsoft Management Console (MMC) snap-in (above). These policies played a major role in enforcing and securing allowed communications between the OpenHack servers. These rules enabled us to enforce the best practice of least privilege by:

  • Requiring that all traffic needed for running and administering the application be explicitly and uniquely specified in each system's IPSec policy.
  • Requiring communications between systems to be authenticated using certificates.
  • Requiring communications for administrative purposes to be both authenticated using certificates and encrypted.
  • Denying all traffic not expressly permitted for the application or administration of the system, including ICMP and IP traffic (the "default deny" rule).

IPSec rules have three major components: the filter which identifies traffic to be handled by IPSec, the action to take when such traffic is found by a filter, and the authentication mechanism used to establish a security association. If two systems trying to communicate do not have rules that identify the traffic as well as a common authentication mechanism between them, they cannot establish a connection.

The first step towards locking down the solution using IPSec was to fully understand the communication paths amongst the different systems so appropriate IPSec filters could be built. The Web server had to be allowed to communicate with the SQL Server database; the remote access server had to allow administrators to use a virtual private network (VPN) to access the management segment of the network; the management server had to grant VPN clients the ability to create Windows 2000 Terminal Services client sessions—these enable access to applications running on a remote machine's desktop—as well as to access and copy files to shares on the management server; all systems had to allow the management server to generate an administrative Terminal Services session to their private interface; and, finally, all systems had to have access to specific file shares on the management system. Once the connectivity required between the systems was mapped out on a per-port basis, we created IPSec filters on each individual system.

Next we had to determine how traffic was handled as it was identified by the filters on the system. For OpenHack 4, we defined four possible actions to take (known as "filter actions"):

  • Block the traffic.
  • Permit the traffic.
  • "Authenticate and Sign"—Authenticate the source of the traffic using certificates and establish a security association using packet signing.
  • "Authenticate, Sign, and Encrypt"—Authenticate the source of the traffic using certificates and establish a security association using encryption and packet signing.

The block rule simply drops the packet. This rule functioned as the "default deny" rule, which meant, "If we haven't expressly allowed the traffic, don't allow it." The permit rule lets the traffic come through regardless of source. This was used to allow public access to the Web application.

While authenticating traffic using certificates required us to generate and distribute IPSec certificates from a common Certificate Authority (CA), it added a significant amount of integrity to the systems' ability to communicate securely. It should be noted that we used a stand-alone CA. After all the certificates had been granted, the CA was removed from the network. If the CA is no longer needed at production time, by all means follow this approach—it is another great way to reduce the surface area of the solution.

Using IPSec certificates, we were able to ensure the identity of source and destination systems, including remote administrators accessing the remote access server. By configuring the policies so all transmissions were signed using a SHA1 hash, we ensured that packets could not be successfully modified by attackers while in transit between back-end systems.

We encrypted management server communications, using the MD5 algorithm to verify packet integrity. This way, even if attackers had been able to breach the security on one of the Internet-facing systems, they would be unable to eavesdrop on the private network's traffic. This allowed administrators to safely connect to the live Web site to perform application updates.

IPSec processes rules with the most specific rules taking the highest precedence. Therefore, every system started with the following two rules:

  • Block all IP traffic.
  • Block all ICMP traffic.

We then built the rules specific to each system. The communications between the Web server and the database server were given an "Authenticate and Sign" filter action; communications with the management server were given an "Authenticate, Sign, and Encrypt" filter action; and public access to the Web site was set to permit access.

Our OpenHack 4 application's logical connectivity as established with IPSec is shown below.


Figure 2. The application's logical connectivity using IPSec

Remote Management & Monitoring

One of the requirements for OpenHack 4 was the ability to update the application while the contest was underway. We accomplished this using a VPN created with the Layer 2 Tunneling Protocol (L2TP), Terminal Services, and restricted file shares.


Figure 3. L2TP used to create VPN (Terminal Services)

First, L2TP requires IPSec certificates in order to establish a connection. We configured several remote administrator machines with appropriate certificates. We then created remote access-enabled accounts for the remote administrators.

In order for an administrator to establish a VPN connection, he or she must have both the IPSec certificate installed on the system and the remote access account credentials. Briefly, the private portion of an IPSec certificate is embedded in the local machine's certificate store in a non-exportable form, meaning the certificate cannot be made portable and used on another system. In effect, we were able to ensure that administrators could use the VPN client account only from permitted remote administrative workstations, minimizing administrative access to the solution.

Once the L2TP session was authenticated, the administrator workstation was granted an IP address on the management network. After establishing a VPN tunnel to the management network, administrators could open up a Terminal Services session to the management server, OHTS, as well as use the "inbox" and "outbox" file shares on the management server to drop off changed site content or retrieve files for analysis. All systems were stand-alone (i.e., not part of a domain), so share access and Terminal Services sessions were configured to use local accounts on the systems with strong, non-obvious passwords, as explained in the Passwords section. The shares used were restricted to allow only read operations from the "outbox" and write operations to the "inbox".

The bulk of administration took place from the management server Terminal Services session. From this session, the administrators would connect to any other systems' remote administration Terminal Services session, essentially "nesting" Terminal Services sessions. They could then connect to the "inbox" and "outbox" shares on the management server and drop or retrieve files as required from the systems being serviced. All traffic supporting these administrative functions required the use of IPSec, as explained above.

SQL Server 2000

The OpenHack SQL Server 2000 database was run on a dedicated machine as a measure of "defense in depth." Even if the Web tier were cracked, the database and all of the information it contained would remain isolated and protected.

As mentioned earlier, our solution used integrated Windows authentication to connect to the database. This is a good practice to follow as it eliminates the need to develop and securely store a password for accessing the database.

For backward compatibility, Windows 2000 and Windows XP support several older authentication protocols. Because every machine that needed to access our database server supported NTLMv2 authentication, we changed the LAN Manager authentication level to accept NTLMv2 only, a change we strongly recommend wherever client support allows. Note that with additional configuration, Windows 95, Windows 98, and Windows NT Server 4.0 with Service Pack 4 and above can also support NTLMv2. By constraining the number of authentication protocols supported, administrators minimize the surface area exposed to attackers.


Figure 4. Setting the LAN Manager authentication level

With SQL Server as with Windows, we were careful to install, configure, and run only necessary services in order to constrain the surface area of the database open to potential attack. For OpenHack, we did not install the Upgrade Tools, Debug Symbols, Replication Support, Books Online, or the Dev Tools components.

The installation was done on an NTFS partition since this allowed for extra ACL-based security of the files and folders that SQL Server uses. The next step, always critical, was to install SQL Server 2000 Service Pack 2 and all of the latest patches.

It is quite common to find SQL Server installations where the service account is localSystem. While in a very well locked-down, private network this may be acceptable, it still carries a lot more privileges than the SQL Server service really needs, being an administrative account on the underlying machine. If there is a requirement for the service account to have access to network resources—for example, when backing up to a network drive, when using log shipping, or when using replication support—then choosing a low-privileged domain account is a good idea. However, if your environment does not require these features, choosing a low-privileged local account would work just as well. For the purposes of this competition, since we were not using these features and in keeping with the principle of "least privilege," we used a local user account.

We created a new NT local user account with the following settings:

  • Created a very strong password, as described in the Passwords section.
  • Removed the ability for the user to change his or her password.
  • Removed Terminal Services access.

After creating the new user account, we used SQL Server Enterprise Manager to change the start-up service account information, forcing the database service to run as this user.


Figure 5. Changing the start-up service account information

In keeping with the philosophy and security best practice of running only required services, we used the Services MMC snap-in to stop the Distributed Transaction Coordinator (MSDTC) service and set it to manual, as the OpenHack database did not run transactions, nor did the server itself run COM+ applications. Here we see another advantage of running the database server on a dedicated machine: a greater ability to reduce the surrounding surface area than would be afforded were the server running side-by-side with other servers and services.

We were able to further reduce the surface area by disabling the SQL Server Agent and Microsoft Search services, as our database solution did not require this functionality.

As a next step, and one aimed more at reliability than security, we brought up the properties for the Microsoft SQL Server service itself and changed the recovery actions to restart the service after a failure. This was done to keep downtime to a minimum in case of a failure.


Figure 6. Changing the recovery actions to restart the service after a failure

We then brought up the Server Network utility and changed the Network properties to hide SQL Server from direct client broadcasts. We also removed the Named Pipes protocol since we only required TCP/IP.

As part of this configuration, we went back and set a very strong password for the SA account. This is recommended even when running in Windows Authentication mode—if, at a later time, the authentication mode is switched to mixed mode, through the Enterprise Manager tool or directly through the registry, you want to ensure that the system is secure even if the administrator forgets to set the SA password at the time. It is always good to plan for the worst-case scenario in this manner.

We changed the default log-in auditing setting to Failure. This writes all failed attempts to log in to the SQL Server database to the error log and the event log—information that might prove useful in identifying attempts to hack into the database.

Next, we removed the default Northwind and Pubs databases as a part of our effort to reduce surface area for potential attacks.

When all of these steps were complete, we created the Awards database used in the final solution. We then went through the tables and stored procedures and ensured that the account associated with the application only had execute permissions on the stored procedures, and did not have any permissions on the actual tables themselves. This allowed us to control access and restrict actions to the stored procedures and not worry about ad hoc SQL queries being run directly against the tables. Furthermore, we ensured that this account did not have any other specific privileges or permissions within SQL Server.
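A brief T-SQL sketch of this permission model follows. (The object and account names are hypothetical; the actual OpenHack schema was not published.)

-- Allow the Web application's account to execute the stored procedure...
GRANT EXECUTE ON GetEntry TO WebAppAccount

-- ...but explicitly deny direct access to the underlying table, so ad hoc
-- queries fail even if the account is ever granted a broader role.
DENY SELECT, INSERT, UPDATE, DELETE ON Entries TO WebAppAccount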

Passwords

A critical step in securing any server is selecting long, complex passwords that cannot be easily guessed. Ideally, a good password should include characters from at least three of the four following groups: lowercase 'a' through 'z', uppercase 'A' through 'Z', the numbers 0 through 9, and non-alphanumeric characters (e.g., '>', '*', '&'). For maximum security, the password should be composed of characters from all four groups as well as characters generated using the ALT key. By creating passwords from these sets that are at least eight characters long, you minimize the chances that an attacker will be able to deduce your log-in credentials. This is the approach we took with each of the servers in our OpenHack solution and one that we highly recommend.
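As a sketch (not taken from the article), a programmatic check for this rule might look like the following:

// Requires the System.Text.RegularExpressions namespace.
// Returns true if the candidate password is at least eight characters long
// and draws from at least three of the four character groups.
static bool IsStrongPassword(string password)
{
    if (password == null || password.Length < 8)
        return false;

    int groups = 0;
    if (Regex.IsMatch(password, "[a-z]")) groups++;
    if (Regex.IsMatch(password, "[A-Z]")) groups++;
    if (Regex.IsMatch(password, "[0-9]")) groups++;
    if (Regex.IsMatch(password, "[^a-zA-Z0-9]")) groups++;

    return groups >= 3;
}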

Conclusion

Not all of the steps we took to secure the OpenHack solution will apply to every Web solution. Nor do these steps represent the full range of measures developers and administrators should take in securing their own solutions. Every project is unique and will require work on the part of both the developers and the administrators to figure out the potential vectors of attack and how to guard against them. That said, the recommendations above proved invaluable to us in OpenHack 4. Even if they do not all apply directly to your solution, there are certain key best practices that should be abstracted from this and applied in one form or another whenever it comes time to build a secure solution:

  1. Plan for security in the original design. This includes developing processes to keep up with the latest Service Packs and patches.
  2. Always install the latest Service Packs and patches.
  3. Always use complex, non-obvious passwords.
  4. Reduce the surface area exposed to attack by turning off all unnecessary functionality.
  5. Adhere to the principle of "least privilege." Never grant more privileges than are absolutely necessary.
  6. Anticipate failures and always practice "defense in depth" to minimize their impact.
  7. When using IIS, run the IIS Lockdown tool and URLScan.
  8. Validate all input data.
  9. Use parameterized stored procedures instead of generating dynamic queries on the database.

For More Information
