Client-side Cross-domain Security
As of December 2011, this topic has been archived. As a result, it is no longer actively maintained. For more information, see Archived Content. For information, recommendations, and guidance regarding the current version of Internet Explorer, see Internet Explorer Developer Center.
Sunava Dutta, Program Manager, AJAX, Windows Internet Explorer
Summary: This paper explores cross-domain threats and use cases, outlines security principles that cross-origin requests should respect, and weighs the risks developers face when enhancing cross-domain access from web applications running in the browser.
Section 1: Introduction
Section 2: Common Cross-Domain Attacks
Section 3: Scenarios in Cross Domain Today
Section 4: Secure Design Principles
Section 5: Security Concerns with Web API WG Proposal on Cross-Domain XMLHttpRequest
Section 6: Conclusion
Section 7: FAQs
As AJAX applications grow in popularity and power, one of the most significant limitations is the same-origin policy used by browsers to prevent cross-domain attacks. In this paper, we will explore cross-domain threats, enumerate common cross-domain use cases, talk about security principles that a cross-origin request should respect, and finally, weigh the risks of various techniques to enhance cross-domain access from web applications running in the browser.
To properly evaluate the risks of any cross-domain changes, it is important to understand the threats to web applications in the current model. Following are definitions of web attacks that will frame the rest of the paper.
Cross-Site Request Forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. It is malicious in the sense that it inherits the identity and privileges of the victim to perform an undesired function on the victim's behalf, like change the victim's e-mail address, home address, or password, or purchase something. CSRF attacks generally target functions that cause a state change on the server but can also be used to access sensitive data.
For most sites, browsers will automatically include with such requests any credentials associated with the site, such as the user's session cookie, basic authorization credentials, IP address, Windows domain credentials, etc. Therefore, if the user is currently authenticated to the site, the site will have no way to distinguish this from a legitimate user request.
In this way, the attacker can make the victim perform actions that they didn't intend to, such as logout, purchase item, change account information, retrieve account information, or any other function provided by the vulnerable website. – Cross-Site Request Forgery, Open Web Application Security Project
For example, a 2007 CSRF attack against Google allowed an evil site to steal a Google Mail user's contact list if the user was logged on to Google Mail at the time of the attack. Google Mail checked the request's cookie to return the correct user's contact list, but did not validate that the requesting page was authorized to receive the response. In this way, the attacker's site was able to steal data from Google Mail; while it never had direct access to the user's Google Mail credentials, it was able to use CSRF to force the user's credentials to be sent, resulting in a leak of the Google Mail information.
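A common server-side mitigation for this class of attack is a per-session secret token that a cross-site page cannot read and therefore cannot replay. The following is a minimal sketch, not anything prescribed by this paper; the key, function names, and session IDs are all hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side secret"  # hypothetical; a real server loads this from protected config


def issue_csrf_token(session_id: str) -> str:
    """Derive a token bound to the session; pages on other domains cannot read it."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()


def is_request_authorized(session_id: str, submitted_token: str) -> bool:
    """Reject state-changing requests whose token does not match the session."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

Because a forging page can make the browser attach the victim's cookies but cannot read the token, a request arriving without a matching token can be rejected as cross-site.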
A cross-site scripting attack exploits the trust a user places in a website, making it a common vector for phishing and related attacks. Cross-site scripting occurs in two basic forms. Reflected cross-site scripting (first order) occurs when an attacker can embed script in data rendered immediately to the victim as part of a GET or POST request. Stored cross-site scripting (second order) occurs when the attacker-supplied script is retained in long-term storage before being rendered to the victim. Reflected cross-site scripting tends to be easier to detect and exploit, though it requires more direct victim interaction, making the attack less reliable. Stored cross-site scripting is often more difficult to detect and exploit, though the attack is more reliable because it typically occurs without any victim interaction.
Most cross-site scripting attacks attempt to hijack the victim's session key and smuggle it out by embedding it in an image URL or similar link. To combat this particular attack, Microsoft introduced a special HTTP-only flag for cookies in Internet Explorer 6 SP1. The server can explicitly set a cookie as HTTP-only, and client script in IE6 SP1 or above will be unable to access it. (By default, cookies are scriptable as normal.) While that approach does complicate the exploit process, it doesn't prevent an attacker from simply scripting all the operations they choose to perform and executing them in the victim's context (effectively turning the attack into a combination of XSS and XSRF). – Same-Origin Policy Part 1: Why we're stuck with things like XSS and XSRF/CSRF, The Art of Software Security Assessment
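The HTTP-only flag described above is set by the server in the Set-Cookie response header. A minimal sketch using Python's standard cookie machinery (the cookie name and value are purely illustrative):

```python
from http.cookies import SimpleCookie


def session_cookie_header(name: str, value: str) -> str:
    """Emit a Set-Cookie header whose cookie is hidden from client script.

    With the HttpOnly flag set, document.cookie in IE6 SP1+ (and later
    browsers that adopted the flag) will not expose this cookie, so
    injected script cannot smuggle it out.
    """
    cookie = SimpleCookie()
    cookie[name] = value
    cookie[name]["httponly"] = True  # renders as "; HttpOnly" in the header
    return cookie.output()
```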
Cross-site scripting attacks are the most commonly reported Web security vulnerability today. There are various approaches to mitigate cross-site scripting attacks, including server or client sanitization or filtering, "safe subset" scripting languages, and so forth. When handling untrusted data from other domains, it is important that proper diligence is exercised to ensure that the data provided is not used to execute a script injection attack in the caller's context.
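One concrete form that diligence takes is escaping untrusted cross-domain data before rendering it as HTML, so the data cannot break out of its context and execute as script. A minimal sketch (the surrounding markup is illustrative):

```python
from html import escape


def render_item(untrusted: str) -> str:
    """Escape data fetched from another domain before inserting it into HTML.

    escape() neutralizes <, >, &, and quote characters, so a payload such as
    <script>...</script> is rendered as inert text rather than executed.
    """
    return f"<li>{escape(untrusted, quote=True)}</li>"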
DNS Rebinding is an attack on the insecure binding between DNS hostnames and network addresses. During a DNS rebinding attack, an attacker will manipulate DNS records for a site he controls (e.g., *.evil.com) such that at some times the hostname points at a server under his control, and at others, the hostname points at a victim server or device.
In this way, the attacker is able to bypass the same-origin-policy restriction because both the victim and the attacker have the same hostname (at different points in time). This attack technique can enable firewall circumvention, because a victim server behind an organizational firewall is reachable by a browser within the same organizational firewall.
Strengthening the client's binding between a DNS hostname and the network address (e.g., pinning) has been proposed as a mitigation, but such a change may lead to application compatibility problems (e.g., with CDNs, load-balancing, etc). Servers can help mitigate the threat of DNS rebinding by using HTTPS and verifying the HOST header on inbound requests.
A good explanation of DNS rebinding can be found here: http://christ1an.blogspot.com/2007/07/dns-pinning-explained.html.
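The Host-header verification mentioned above can be sketched as follows. The hostnames here are hypothetical; a real deployment would compare against its full configured set of served names:

```python
EXPECTED_HOSTS = {"api.example.com", "www.example.com"}  # hypothetical served names


def host_header_is_valid(host_header: str) -> bool:
    """Reject requests whose Host header names a hostname we do not serve.

    Under DNS rebinding, the browser connects to our address but sends the
    attacker's hostname (e.g. victim.evil.com) in the Host header, so a
    strict comparison stops the rebound request.
    """
    hostname = host_header.split(":", 1)[0].lower()  # drop any :port suffix
    return hostname in EXPECTED_HOSTS
```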
Any security mechanism that relies upon multiple requests (e.g., request permission, then request resource) must be hardened against DNS rebinding to help mitigate a Time-of-Check, Time-of-Use attack.
Time-of-Check/Time-of-Use (TOC/TOU) attacks occur when principals or permissions change between the time a permission is checked and the time it is actually used.
In the event of a DNS rebinding attack, the actual principal identity of the server may change, enabling permissions granted by one server (the attacker) to be used against another server (the victim).
In another form of TOC/TOU attack, consider the following case. The client obtains permissions against a server, but the server subsequently is reconfigured to change permissions. The cached permissions may be illegally reused against the server unless the client rechecks permissions.
Any cross-domain approach that splits permission checking and usage across multiple requests must weigh the performance and security implications of cached permissions.
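One way to weigh those implications is to cache a grant only briefly and re-check it once it ages out, so a server reconfiguration is eventually picked up. A sketch under assumed names (`check_fn` stands in for whatever round trip re-validates the grant; no such API is defined in this paper):

```python
import time


class PermissionCache:
    """Cache a server's cross-domain grant, re-checking after max_age seconds.

    Caching the answer forever would invite TOC/TOU reuse after the server
    is reconfigured, so each entry expires and forces a fresh check.
    """

    def __init__(self, check_fn, max_age: float, clock=time.monotonic):
        self._check = check_fn
        self._max_age = max_age
        self._clock = clock
        self._cache = {}  # origin -> (allowed, checked_at)

    def is_allowed(self, origin: str) -> bool:
        entry = self._cache.get(origin)
        now = self._clock()
        if entry is None or now - entry[1] > self._max_age:
            entry = (self._check(origin), now)  # time-of-check refreshed
            self._cache[origin] = entry
        return entry[0]
```

Shorter expiry narrows the TOC/TOU window at the cost of more permission-check traffic; that trade-off is exactly the one the paragraph above describes.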
Wildcarding attacks occur when access controls are set in error and allow for unintended access. For example, if access control rules are set to *.com, any .com site can access the resource. While such an attack is clearly enabled by a configuration error by the service provider, there are numerous examples of this in the wild today. Such mistakes can occur when developers switch responsibilities, as sites are merged, due to simple typographical errors, and numerous other reasons. As access-control rules become more complex, the likelihood of configuration errors increases. For example, major sites have suffered exploits in the past where access control rules were incorrectly set.
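The danger described above is easy to reproduce with any suffix-style matcher: one overly broad rule silently widens the grant to every matching host. An illustrative sketch using glob-style matching (the rule strings and hostnames are hypothetical):

```python
import fnmatch


def origin_matches(rule: str, hostname: str) -> bool:
    """Naive wildcard matcher of the kind access-control rules often use."""
    return fnmatch.fnmatch(hostname, rule)
```

A rule intended as `*.partners.example.com` admits only partner hosts, but a rule mis-set to `*.com` admits any .com attacker, which is the failure mode described above.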
These are the scenarios that developers can be expected to address using cross-origin requests. Depending on the Web application, the scenarios may matter to different degrees, so the following list is not in order of priority.
- Fetching and Posting Resources anonymously across sites:
- Description: If you have a Web site that fetches resources (e.g., Craigslist postings under 'Cars less than $5000' in RDF format) from a different site to extract data from the response, a client-side cross-domain feature could be used to fetch them in a single request if the resource provider (here, Craigslist) enables cross-site access.
- Enabling this scenario would require cross-domain support for GET and POST HTTP methods (or an equivalent), and browsers should enable data returned across domains to be accessible to callers.
- Fetching and Posting Resources requiring user credentials:
- Description: If you are preparing your tax returns on a site that fetches all your tax documents from different employers and financial institutions, the site can use a client-side cross-domain feature to request the data and send your credentials to the different companies providing it.
- Enabling this scenario requires some sort of user identifiable information to be sent, such as cookies or credentials, so that user-sensitive data can be returned across domains.
- Fetching and Posting Resources requiring restricted access based on origin
- Description: If you have a site with restaurant ratings that third-party domains may access provided they are members, the site would need a mechanism for allowing or denying cross-domain requests based on the originating domain. A cross-domain solution that enables client-side access based on a 'policy', or list of allowable domains, would solve this scenario.
- Enabling this scenario requires that an access control list and a set of rules be maintained by the service provider.
- Supporting cross domain RESTful Services:
- Description: If a site (say Windows Live Mail) implements a simple REST API to create, delete, and modify resources, a cross-domain solution could let an editing application on another site store the results of its editing actions back on the requesting site (Windows Live Mail).
- Enabling this scenario requires, at a minimum, the ability to send REST-related HTTP verbs cross-domain, and arbitrary verbs in the worst case.
- Supporting cross-domain services with arbitrary headers:
- Description: A web service can send a Simple Object Access Protocol (SOAP) Action header (a subject of much controversy as to its purpose!) cross-domain, allowing intermediaries such as firewalls to appropriately filter SOAP request messages in HTTP.
- Enabling this scenario requires allowing script to send headers (arbitrary or otherwise) across domains.
- Combination of all the above client side cross-domain features:
- Description: A site can use a combination of these cross-domain features to enable powerful services. For example, your financial institution can maintain a list of all the tax preparation sites that can access it. The user credentials are also sent and access is granted if the requesting tax preparation site is in the allowed list and the user's credentials are valid. If this is the case, then the requesting tax preparation site can delete the account or perform a complex transaction (edit details of the user's other account) if the user requests it. This service would leverage RESTful APIs and require a cross-domain authentication system as well as a cross domain list of partner sites that can request the data.
- May require one or more of the following:
- Support for HTTP methods including but not limited to GET and POST
- A mechanism to enable access control based on the originating domain
- A mechanism to send user credentials, cookies or identifiable information
- Support of arbitrary headers across domains
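The combined scenario above reduces, on the server side, to a two-part check: the requesting origin must be on the partner list and the user's credentials must be valid; either failure alone denies access. A hedged sketch with hypothetical partner names (nothing here is part of any proposal discussed in this paper):

```python
ALLOWED_PARTNERS = {"https://tax-prep.example"}  # hypothetical partner allow list


def authorize(origin: str, credentials_valid: bool):
    """Grant cross-domain access only when BOTH checks pass.

    Returns (allowed, reason) so a denied caller can be handled without
    revealing anything about the rest of the partner list.
    """
    if origin not in ALLOWED_PARTNERS:
        return (False, "origin not in partner list")
    if not credentials_valid:
        return (False, "user credentials invalid")
    return (True, "ok")
```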
"Secure by design, in software engineering, means that the software has been designed from the ground up to be secure. Malicious practices are assumed, and care is taken to minimize impact when a security vulnerability is discovered. For instance, when dealing with user input, when the user has to type his or her name, and that name is then used elsewhere in the program, care must be taken that when a user enters a blank name, the program does not break." – Secure by Design, Wikipedia
Secure design principles are key to ensuring that users, whether the end-user or service provider, are protected. The increasingly hostile Web and ever more clever attackers lead to the proliferation of new vectors like XSS and CSRF. In the Web of today, it is critical that solutions be secure-by-design prior to release. This does not guarantee that there will be no exploits; however it does ensure that the bug trail is significantly lower and goes a long way toward protecting the user. For more details on this, please read our MSDN article on The Trustworthy Computing Security Development Life Cycle.
Cross-site XMLHttpRequest is essentially a combination of a cross-domain access mechanism, Access Control (AC), and an object to enable this mechanism, in this case, a versioned XMLHttpRequest object called XMLHttpRequest Level 2 (XHR). This cross-domain implementation will be referred to as CS-XHR.
Note: This paper is based on the AC and XHR Level 2 drafts as of June 3, 2008.
XDomainRequest (XDR) is the new object that we designed for cross domain using a "clean room" approach, one where we start with strict security principles and a "clean slate" and add functionality only if it meets those principles.
Your help and support in securing the CS-XHR design will go a long way towards ensuring that developer concerns are addressed and both service providers and users are safe.
Here is a list of security principles we believe are critical to developing a secure client side cross-domain solution. Keep in mind that any solution should have defense-in-depth to ensure that all of these principles are robustly respected, even in the face of developer implementation flaws or service provider misconfiguration. More on how attacks can occur if these principles are violated is outlined in Section 5.
"Secure by Design: the software should be architected, designed, and implemented so as to protect itself and the information it processes, and to resist attacks." – Secure Development Lifecycle Overview Principles
Protect existing sites that rely on the Same Origin Policy for cross-domain defense. Legacy servers today do not expect cross-domain requests other than what is possible through HTML forms as defined in the HTML 4 specification. This means that only GETs and POSTs are allowed and expected cross-domain today. If a cross-domain request is sent with other HTTP verbs, arbitrary headers, or cookies, services may assume that it was sent from the same origin by XMLHttpRequest (the only object that allows this, and which, as currently implemented, is restricted to the same site). The challenge is that these unexpected HTTP semantics, sent cross-domain, have the potential to be interpreted as same-site requests. To make things worse, if the cross-domain solution is compromised, it can lead to arbitrary access and actions on the user's behalf.
"When the victim visits the malicious SWF file, the above 6 steps will silently execute in the background. At that moment the attacker will have control over the service the port forwarding rule was assigned for. Keep in mind that no XSS is required; it is a matter of visiting the wrong resource at the wrong time."
"Also, keep in mind that 99% of home routers are vulnerable to this attack as all of them support UPnP to one degree or another." – Hacking the Interwebs, Gnucitizen.org
Make it secure by default and in deployment by keeping it simple and easy to grasp (or provide an alternative for developers who are not security gurus). A design that starts with what is already possible in the browser today, and extends that while minimizing any compromises to the browser's security envelope, will ensure that the proposal is secure by default. A rich, complex proposal for cross-domain access that depends on many components and has multiple stages and behaviors in different modes lends itself to cross-domain bugs and is unnecessary, especially for the developer who is not interested in some of the functionality. A light component that is easy to deploy and dedicated/designed from the ground up to solve a certain set of scenarios will result in an easy security story and a short learning curve, and can be implemented with minimal chance of error.
"Secure by Default: in the real world, software will not achieve perfect security, so designers should assume that security flaws would be present. To minimize the harm that occurs when attackers target these remaining flaws, software's default state should promote security. For example, software should run with the least necessary privilege, and services and features that are not widely needed should be disabled by default or accessible only to a small population of users." – Secure Development Lifecycle Overview Principles
"Secure in Deployment: Tools and guidance should accompany software to help end users and/or administrators use it securely. Additionally, updates should be easy to deploy." – Secure Development Lifecycle Overview Principles
While the W3C Web API WG draft on Cross-Domain XMLHttpRequest (CS-XHR) addresses the full list of scenarios detailed in Section 3 and has had a lot of work put into it, there are still several concerns that the draft doesn't address, especially around the security principles. Specifically, CS-XHR does not:
- Avoid privilege escalation attacks by ensuring that the user's authority cannot be misused.
- Protect existing sites that rely on the Same Origin Policy for cross-domain defense.
- Make it secure by default and in deployment by keeping it simple and easy to grasp (or provide an alternative for developers who are not security gurus).
For reference, XDR supports fetching and posting resources anonymously across domains. We focused on this important scenario because we felt that we could secure this in IE8 with confidence by respecting our security principles.
In this section, I'll demonstrate a few of these concerns that could be critical blockers to implementation by browsers and security-minded developers. Mozilla echoed our sentiments here by removing CS-XHR support from the Firefox 3 Beta until the specification addressed further security concerns.
I've made recommendations where possible, both in this paper and to the WG, to secure the scenarios that CS-XHR is addressing. In cases where we don't have a solution today, I've refrained from making a recommendation, focusing instead on best practices and additional restrictions that developers must apply to secure their code if they are using CS-XHR. Hopefully, these efforts will result in a more secure specification.
XHR has a history of bugs and extending it for cross-domain access does not build confidence.
Rather than working backwards to secure an object with a poor security record, it makes more sense to start from a basic architecture and add functionality incrementally, securely, and only as necessary.
XHR has a poor security record across all the major browsers, ranging from header-spoofing attacks to redirection attacks. Header-spoofing attacks are now even more worrying given that CS-XHR uses headers to determine which sites can access resources and which actions they can perform (HTTP verbs and headers).
"I was never a huge fan of overloading the XHR object to do this because it seems like there are just too many differences and security issues you'd have to lock down. IE's approach, making a completely different object, makes a lot of sense to me and quite logically locks down functionality that otherwise would be part of an if statement in the XHR code." – Nicholas C. Zakas, http://www.nczonline.net/blog/2008/4/27/cross_domain_xhr_removed_from_firefox_3
"Before you go thinking I'm all for cross-domain XHR, I'm not. Yet. The security implications of such an action need to be thought out. Carefully. My only point is that I've yet to think of a reason why the world's crackers are desperate to get their hands on cross-domain XHR." http://getahead.org/dwr/ajax/cross-domain-xhr
"It was possible to add illegal and malformed headers to an XMLHttpRequest. This could have been used to exploit server or proxy flaws from the user's machine, or to fool a server or proxy into thinking a single request was a stream of separate requests. The severity of this vulnerability depends on the value of servers which might be vulnerable to HTTP request smuggling and similar attacks, or which share an IP address (virtual hosting) with the attacker's page." http://www.mozilla.org/security/announce/2005/mfsa2005-58.html
"Secunia Research has discovered a vulnerability in Opera, which can be exploited by malicious people to steal content or to perform actions on other web sites with the privileges of the user. Normally, it should not be possible for the XMLHttpRequest object to access resources from outside the domain of which the object was opened. However, due to insufficient validation of server side redirects, it is possible to circumvent this restriction. The vulnerability has been confirmed in version 8.0." http://secunia.com/advisories/15008/
"Microsoft Internet Explorer XMLHttpRequest object request and response spoofing" http://securityvulns.com/Gnews179.html
"Available for: Mac OS X v10.3.9, Mac OS X Server v10.3.9, Mac OS X v10.4.9 or later, Mac OS X Server v10.4.9 or later
Impact: Visiting a malicious website may allow cross-site requests
Description: An HTTP injection issue exists in XMLHttpRequest when serializing headers into an HTTP request. By enticing a user to visit a maliciously crafted web page, an attacker could conduct cross-site scripting attacks. This update addresses the issue by performing additional validation of header parameters. Credit to Richard Moore of Westpoint Ltd. for reporting this issue." http://m.phpmagazine.net/entry_1_6025.html
XHR Behaves Differently in Cross-Domain Mode and Same-Site Mode
XHR behaves differently in cross-domain mode and same-site mode, leading to unnecessary confusion for the web developer: it is the same API in name only.
XHR is a widely used object. Consequently, it is difficult to re-engineer without breaking existing deployments, adding complexity, and confusing developers; in the process, new holes may be introduced that require further patching. This different cross-domain behavior means that it has all the disadvantages of XMLHttpRequest, such as its security flaws, without any clear benefit. A new object without redundant cross-domain properties like getAllResponseHeaders mitigates a number of these worries.
For example, the following paraphrases some of our feedback to the editor of CS-XHR.
- The proposal modifies the expected behavior of the SetRequestHeader method, and the availability of the user and password parameters on the Open() method.
- The proposal requires that the HEADERS_RECEIVED state must either never be reached for a cross-origin request, or it must be delayed until any access control list in the entity is evaluated. Hence, eventing behaves differently when a request is cross-origin.
- The proposal requires that getAllResponseHeaders() and getResponseHeader() behave differently by not inappropriately exposing any trusted data of the response, such as HTTP header data.
Access-Control Rules that Allow Wildcards
Requiring implementers to maintain access control rules that allow wildcards can lead to deployment errors.
- Where access control is important, other architectures are recommended: server-side proxying, for service providers interested in maintaining access control rules, and the HTML 5 WG's Cross-Document Messaging.
- If you are going to use CS-XHR, we recommend avoiding wildcards, auditing access control rules regularly, and avoiding hosting sensitive data from domains that expose data to CS-XHR.
Permitting the end user to decide whether the web application they're using should be able to make a cross-domain request may be worth investigating. There are significant user experience challenges because the user may not understand the implications of such access.
The service provider who sets the access permissions and returns the requested content is another key player here. Providing a simple, scalable solution will ensure that mistakes in permissions don't unravel as services are deployed and maintained. For example, Flash has an access control mechanism similar to the one in CS-XHR, and it has been vulnerable to wildcarding attacks. Wildcarding attacks occur when access controls are set in error (a distinct possibility as the number of rules to filter cross-domain requestors increases and becomes complex) and allow for unintended access. This is especially scary given that AC can send cookies and credentials in requests. It also violates the AC draft's requirement that it "should reduce the risk of inadvertently allowing access when it is not intended. That is, it should be clear to the content provider when access is granted and when it is not."
"10/10/06 Flash + JS + crossdomain.xml = phun
I was browsing Jeremiah Grossman's Blog and found an interesting post talking about a file named crossdomain.xml and extended uses of it in regards to cross site scripting. In a nutshell there's this file called crossdomain.xml used by flash to say 'I am www.domainb.com and I will allow users of www.domaina.com to make requests to me'. Unfortunately people are misconfiguring their crossdomain.xml file and allowing everybody." http://www.cgisecurity.com/2006/
"Any programmer who /understands/ these concepts should set their code to carefully allow only certain sites access, and/or have generic levels of access to public sites...but there a /lot/ of PHP-'users' who don't know half of what they entered into an editor.
I would certainly /hope/ a bank wouldn't do something stupid like implement this carelessly, but if they did*, or some up-and-coming FaceBook-like site did it, some people could have a very bad day. I'm sure these factors were considered already, but I still find it troubling to be breaking down the walls of security present in current browsers, for the sake of Web 2.0." – John Resig, http://ejohn.org/blog/cross-site-XMLHttpRequest/
"As I've shown with FlashXMLHttpRequest, you can use Flash to make arbitrary GET and POST requests to any domain that hosts the proper crossdomain.xml file. Usually this file is posted in a domain that hosts web services, to make them accessible from Flash. But if that domain also contains some UI, another CSRF protection in this UI becomes useless.
Flickr was vulnerable to this exploit, because it hosted an "allow all" policy file in its main domains: flickr.com and www.flickr.com. We notified Flickr and they fixed the hole promptly by moving their APIs to a separate domain and removing the crossdomain.xml file on their main domain (now 404)." – Julien Couvreur, http://blog.monstuff.com/archives/000302.html
"I honestly thought that I'd covered all the ground I could on same-origin policy, but I just stumbled across the W3C draft on Access Control for Cross-site Requests. Apparently, this is implemented in the upcoming Firefox 3 and (given the author) can be expected to show up in Opera too. It seems this protocol's sole purpose is to allow cross-site XMLHttpRequests to get through the same-origin policy. I admit serious dismay that no one is taking the opportunity to shore things up a bit while making new holes (especially considering my previous thoughts on the subject). However, I understand that developers are probably clamoring for the chance to make shiny new AJAX mashups and widgets. That said, what I cannot understand is how this particular solution is the best we can get. Here's a rough explanation of how the protocol works:
- For a cross-site GET, the request is issued, and the response is checked for access-control headers (or header directives in the document), which determine what requesting domains are allowed to make cross-site requests. If the requesting domain is allowed, the response is made available to the script; otherwise, it fails.
- A POST (or DELETE) is a bit different, and is handled in multiple steps in order to prevent unwanted side effects:
- The browser issues a GET request for the desired URL with a Method-Check header listing the method of the request that will follow.
- The server responds with access-control headers telling it what methods are allowed or denied for a particular set of origins.
- If the origin and method combination are allowed, the browser issues the cross-site POST request.
So, I have to ask, what is the value in spreading the origin policy exceptions across the entire web site? I'd expect that it's going to make web-app security auditing a whole lot more complicated. I will preempt the argument that a policy file would expose site structure and cross-site relationships, as I'd maintain that information is already more than easy enough to get when spidering the site. Finally, doesn't anyone else care about shoring up all the existing cross-site stuff so we have a little more defense against things like XSS and XSRF? Because I can't see any way of Access Control for Cross-site Requests ever addressing the security problems we currently see every day." - Browser Security Gets Even More Confusing (Cross-site Request Draft)
Access-Control Rules Visible on the Client
Allowing Access Control Rules to be visible on the client leads to information disclosure.
- XDR ensures that servers regulate access to individual requests and that rules are not available to the client.
- Doing the evaluation server side will raise the bar on profiling a site's allow list.
- Server side proxying allows for sites to maintain a list of hidden partners and allowed sites.
The access control rules need not be exposed to the world, as this information could potentially be sensitive. For example, your bank may maintain a list of allowed partners based on your other frequently accessed bank accounts. Making these rules available on the client can lead to profiling attacks if this data is intercepted. While AC and XDR allow servers to use the Access-Control-Origin header to make access-control decisions, preventing the rules from being viewed on the client, the reality is that, in practice, web developers are likely to opt for what's easiest and will not leverage this, given the alternative available in AC. While this has not been a prominent concern in existing deployments, our security experts have raised it as a potential door for exploits, as there could be scenarios where the cross-domain file becomes of interest to attackers as adoption increases.
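The server-side alternative described above makes the allow/deny decision per request and never ships the rule set to the client: a denied caller learns nothing, and an allowed caller learns only that its own origin was acceptable. A hedged sketch with illustrative names (the response shape is not any header format from the drafts):

```python
_PRIVATE_PARTNERS = {"https://partner-a.example", "https://partner-b.example"}  # never sent to clients


def respond(origin: str, body: str) -> dict:
    """Decide per request; the response never carries the full rule set.

    A client-visible policy file would let anyone enumerate a site's
    partners; here the private list stays on the server, so the allow
    list cannot be profiled from responses.
    """
    if origin in _PRIVATE_PARTNERS:
        return {"status": 200, "echo_origin": origin, "body": body}
    return {"status": 403}  # denial reveals nothing about other partners
```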
Access-Control Rules in Headers
Sending Access Control Rules in Headers can lead to inadvertent access.
- Enable users to restrict site-to-site access. This has its own set of challenges, such as UI, that need to be investigated.
- If you are using CS-XHR, we recommend not using it to send sensitive data, so that if Access Control (AC) rules are compromised, the impact of the disclosed data is minimal. Even when AC rules are audited and maintained, if the rules are spoofed (a possibility, because XHR has been subject to header-spoofing attacks and AC rules are carried in headers), the data may be compromised.
- The Web API Cross Site XMLHttpRequest plan allows access control rules to be carried in headers. This is especially dangerous given that XMLHttpRequest has suffered header spoofing attacks in the past on multiple browsers. Spoofed rules could grant cross-domain access to legacy sites that never opted in, or change the access control rules of existing sites using CS-XHR.
- To make things even more confusing, both an XML file and headers can be used to configure access control in cross-site XMLHttpRequest.
"(Description Provided by CVE) : Firefox before 1.0.7 and Mozilla Suite before 1.7.12 allows remote attackers to modify HTTP headers of XML HTTP requests via XMLHttpRequest, and possibly use the client to exploit vulnerabilities in servers or proxies, including HTTP request smuggling and HTTP request splitting." http://osvdb.org/osvdb/show/19645
"That the XDR proposal enables cross-domain requests with minimal complexity
and in a way which is unlikely to cause IT administrators to disable the feature,
is, in my opinion, reason enough to be enthusiastic. The XDR proposal seems like
something that could be a stable platform on which to start building new kinds of applications.
I think the XDR proposal also gets some important deployment advantages from its avoidance of existing ambient authority mechanisms. Many web sites are composed of both public and private resources living inside the same URI namespace. For example, take a look at the structure of the W3C site. Both member only and public resources share the same URI namespace. Under XDR, the W3C could safely add a XDomainRequestAllowed header to all responses across the whole site. As a result, all the public resources become accessible through XDR, but the member-only resources remain protected, since XDR is unable to access or submit HTTP auth credentials. In contrast, detailed engineering work, and a corresponding security audit, would be required for the W3C to adopt the AC4CSR proposal; otherwise, the member-only resources would be vulnerable to XSRF attacks." Tyler Close, HP http://lists.w3.org/Archives/Public/public-webapi/2008Apr/0095.html
Maintaining Access Control Based on a Header
Maintaining Access Control based on a header that instructs the client to serve the response to a particular domain/path instead of an individual request leads to the potential for inadvertent access.
- Ensure proper and complete URL canonicalization if Access-Control is ever granted by path.
- Enforce access control on a per-request basis. Do not permit policy from one URL to regulate access to another URL.
This can lead to vulnerabilities when an attacker can modify the path of the request using special characters, a flaw that we pointed out to Mozilla on a teleconference on cross-origin requests. A solution is currently being discussed by the Web API WG (see the quoted discussion below). Note that the AC draft can be shown to require access control implementers to take additional security measures, even though this conflicts with the draft's own requirements that it "Must not require content authors or site maintainers to implement new or additional security protections to preserve their existing level of security protection" and "Must not introduce attack vectors to servers that are protected only by a firewall."
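The canonicalization recommendation above can be sketched as a minimal example (the helper names and policy model here are our own, not from either draft): decode repeatedly to defeat double encoding, normalize separators, and collapse dot segments before any path-based policy comparison.

```python
# Illustrative sketch (our own helper names, not from either draft): decode
# repeatedly to defeat double encoding, normalize separators, and collapse
# dot segments before any path-based policy comparison.
import posixpath
from urllib.parse import unquote

def canonicalize(path):
    prev = None
    while path != prev:                 # decode until stable, catching %252e tricks
        prev, path = path, unquote(path)
    path = path.replace("\\", "/")      # treat backslash as a path separator
    return posixpath.normpath("/" + path.lstrip("/"))

def policy_applies(policy_path, request_path):
    """Grant only when the canonical request path sits under the policy path."""
    base = canonicalize(policy_path).rstrip("/") + "/"
    return canonicalize(request_path).startswith(base)
```

With this, a request for `/public/%2e%2e/private/data` canonicalizes to `/private/data` and falls outside a policy scoped to `/public`, closing the traversal described in the disclosure quoted below.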
"The policy file is usually placed in the document root of the web server with
the name crossdomain.xml, unless a different path is specified. When a request to
an external URL is made, first of all, flash requests the content of the policy
file at the external domain, and then (if the policy permits it) the user request
is made. By adding some special chars in the URL, it is possible to modify the path
of the URL request of the policy file.
Modifying the path of the request an attacker can perform GET requests to an arbitrary file on the web server (he can for example exploit CSRF vulnerability on a third web site)." http://seclists.org/fulldisclosure/2007/Nov/0245.html
"What I suggest is that we prohibit the Access-Control-Policy-Path header from being used on URIs that includes the string "..\", in escaped or unescaped form. One worry with this is if there are encodings which put the '.' or '\' characters to other code points than 2E and 5C respectively. I.e. would we need to forbid its use on URIs other than ones containing (.|%2e)(.|%2e)(\|%5c)
That sounds like perpetuating a bad hack in a spec. I'd rather see us say -- in a note somewhere in the spec -- that servers will want to be careful, and will want to, e.g., configure their respective web application firewall to prevent this attack from occurring." http://lists.w3.org/Archives/Public/public-webapi/2008May/0435.html
Sending Cookies and Credentials Cross Domain
The Access Control proposal sends cookies and credentials cross domain in a way that increases the possibility of information disclosure and unauthorized actions on the user's behalf.
- Preventing cookies and other credentials from being sent cross domain will help ensure that private data is not inadvertently leaked across domains.
- The HTML 5.0 feature called Cross Document Messaging, combined with the same-origin XMLHttpRequest, enables regulated cross-domain access on the client without requiring potentially dangerous functionality (e.g., cross-domain submission of headers).
- For down-level clients, server side proxying architectures will likely continue to be used by organizations handling sensitive data.
Future designs may include:
- The user could enter credentials while making a proper trust decision about who ultimately gets the credentials and to whom this grants access. Any user trust decision needs to be properly understood, as poor UI design or spoofing may lead the user to make the wrong decision. Done correctly, this provides the benefit of the user's explicit assent, and a number of existing software dialog warnings are based on this mechanism.
- The browser could send an XDomainRequestCookie header. This would carry cookies under a new header name, so that existing sites would not inadvertently receive a cookie and treat the request as same origin; such sites could simply ignore the header and take no action based on the user's session identifier. Aware servers, on the other hand, could read the new header and provide useful, user-specific services based on its contents. This of course requires server frameworks to be updated to look for such cookies and parse them properly. In addition, any intermediary proxy that behaves differently based on cookies would break, but these are issues that are definitely worth a further look.
- The web page for a hosted resource could include an authorization token that the user can drag and drop on a third-party Web page. This token authorizes a single request of a predetermined type. Often this user action will be required regardless of any security policy, since the third-party Web page will need to be told what resource it should send its request to. Both the authorization token and the resource identifier can be specified by the user in the same user interface gesture.
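The speculative XDomainRequestCookie idea above could be handled on an aware server roughly as follows; the header name and its semantics are hypothetical and not part of any shipped design:

```python
# Sketch of the speculative XDomainRequestCookie idea above. The header name
# and its semantics are hypothetical; no shipped version of XDR defines them.
from http.cookies import SimpleCookie

def read_cross_domain_cookies(headers):
    """An updated framework would look for the new header explicitly; legacy
    servers, which only read the Cookie header, never see these values."""
    raw = headers.get("XDomainRequestCookie")
    if raw is None:
        return {}  # legacy behavior: no ambient cookies cross domain
    jar = SimpleCookie()
    jar.load(raw)
    return {name: morsel.value for name, morsel in jar.items()}
```

The design choice is that legacy servers fail safe: because they never look for the new header, a cross-domain request carries no ambient authority for them.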
The way AC handles these increases the potential for Cross-Site Request Forgeries, as requests will be automatically authenticated and may contain headers otherwise impossible to send via script. For example, a user may be authenticated via CS-XHR to his or her bank from an online tax preparation site. If the user subsequently visits an evil site, it could craft CS-XHR requests to the bank site and send a token to authorize actions. Even though CS-XHR requires an opt-in model from the server (this is good), an XSS vulnerability, an AC header spoof, or an accidentally set wildcard opens up another channel for unwanted authenticated actions.
In addition, a number of sites may assume and rely on cookies being sent with cross-site requests and this could become a third party problem if cookies are sent by default. As the Web API WG members note, a large number of sites will not understand cookie authorization and will wind up susceptible to CSRF.
Privacy: Including the cookies lets sites more easily track users across domains.
- "sending cookies, by-default, with 'non-safe' requests.
- many of the risks that are associated with allowing cross-site XHR, e.g. Cross-Site Request Forgery, can be mitigated by not sending cookies with these requests.
- Jonas concerned that sites will assume and come to rely upon browsers not sending cookies with cross-site requests, which could lead to problems if we ever decide to start sending 3rd party cookies by default
- We should not send cookies and auth headers."
<Hixie> the reasons to include cookies are simple -- if we don't have them, we (Google)
basically can't use xhr.
. . . #[00:19] <sicking> so the thing is that CSRF today is kind of a catastrophe. There are lots and lots and lots of sites that are susceptible to it. If we had a world where cookies weren't sent for third-party requests we'd be in a much safer web
. . .
# [00:21] <sicking> jruderman, my point is that clearly the technologies we have today are too complex, so the argument "it's no more complex than what we have today" is a bad argument
. . .
# [00:24] <Hixie> sicking: Google similarly redirects all ad clicks through its servers (though in this case not for user tracking purposes, but that's only because we avoid that kind of behaviour)
. . .
# [00:34] <Hixie> I think the idea of blocking third party cookies is archaic and paranoid, and makes people feel safe when they should be realising that they are being tracked
. . .
# [00:41] <Hixie> dump the pref, move on, tell the people who complain that they are being tracked whether they send cookies or not, and that they should find better ways to anonymise themselves (e.g. block cookies to all sites except those they enable, and use tor as their network)
. . .
# [00:47] <sicking> so anyhow, back to Access-Control and cross site XMLHttpRequest. So the worry was that we'd end up with a bunch of sites having CSRF issues because they don't understand that cookie!=authorization
. . .
# [00:57] <othermaciej> cookie preferences and restrictions are only useful for experts who are at the extreme of caring about privacy"
"You do realise that with XDR, 'resource host' has no means to authenticate the
user using (relatively secure) HTTP digest authentication?
I think the history of HTML has taught us that if people want to do something (e.g. styling), and you do not provide the means, they will abuse other mechanisms (tables) to achieve their goals. I can assure you people will work around the limitations of XDR in the same manner. The least we can do is provide a mechanism that lets the user do what he wants, yet is easy to control and secure.
I agree with the goal stated in the last sentence above and it is a significant part of my rationale for opposing the use of ambient authority. Ambient authority, as implemented by cookies and HTTP auth, is hard to control and secure, especially when user requests are created in collaboration with a third party, such as is the intended case with cross-domain browser requests. The attacks linked to above demonstrate some of these problems. In contrast, I think explicit authorization tokens can feasibly be controlled and used in a secure way, such as described in the example above." - http://lists.w3.org/Archives/Public/public-webapi/2008Apr/0095.html
"Cross-Site XHR has been removed due to concerns for spec stability as well as wanting to attempt to make the security model for cross-site loading of private data better." - https://bugzilla.mozilla.org/show_bug.cgi?id=424923#c14
Sending Arbitrary Headers Cross Domain
Sending arbitrary headers cross domain breaks a lot of assumptions that sites today may make, opening them up for exploits. Creating complex rules to limit the headers sent cross domain makes the spec even more difficult to deploy reliably.
Do not allow arbitrary headers to be sent cross domain. Avoid any design where the list of blocked and allowed headers is likely to be confusing and under constant revision as new attacks and interactions arise.
If you are implementing CS-XHR, we advise extreme caution in what headers you allow in the OPTIONS request, in addition to testing the allow list when opening up your service cross domain. Furthermore, we recommend ensuring that the allowed headers do not trigger actions that would be dangerous if the request were compromised by a DNS-rebinding attack.
In general, browsers today cannot send cross-domain GET/HEAD requests with arbitrary headers. With AC, this becomes possible, breaking many previous assumptions. Microsoft is aware of sites that depend on the expectation that arbitrary headers cannot be sent cross domain, an expectation in accordance with HTML 4.0. This is not a good security practice by any means, but enabling this functionality in a way that compromises our users is not an option. As an example, UPnP allows requests with a SOAPAction header to perform actions on a device. If the SOAPAction header is not actively blocked by a cross-site XMLHttpRequest client, attackers will be able to perform this attack against routers and other UPnP devices. Contrast this with XDR, where arbitrary headers cannot be supplied by default.
An option here is to create a block list of bad headers. However, this quickly adds to the complexity of an already complex proposal and, worse, will need continual updates to the spec as more blacklisted headers are discovered after implementations have shipped. This will presumably prevent the spec from stabilizing, and browsers will have to issue patches to secure their implementations.
An allow list would be another option, though this is a lower concern. That said, since web sites today rely on arbitrary headers not being allowed across domains, it is difficult to prove that the headers on the allow list are not already being used by sites for same-origin requests.
To make things even more complicated, the AC spec specifies a mix of allow lists, black lists, and other headers. For example, if a header is not in the allow list, it needs a pre-flight check (the spec already requires pre-flight checks for non-GET HTTP verbs). This adds to the multi-part request that AC allows, and even if the server agrees there is still a blacklist to filter out headers that should not be allowed. The convoluted approach continues with XMLHttpRequest Level 2, which has its own set of blacklists that filter headers before requests go cross domain. Moreover, the XMLHttpRequest black list uses both SHOULD NOT and MUST NOT for blocked headers, leaving the door open for different behaviors across browsers.
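The layered checks described above might look roughly like this in an implementing client. The specific header lists here are illustrative examples, not the draft's normative contents:

```python
# Illustrative only: the lists here are examples, not the draft's normative
# contents. A client implementation layers a safe list, a hard block list,
# and a preflight requirement for everything in between.
SAFE_HEADERS = {"accept", "accept-language", "content-language"}
BLOCKED_HEADERS = {"soapaction", "cookie", "authorization", "host"}

def classify_header(name):
    name = name.lower()
    if name in BLOCKED_HEADERS:
        return "blocked"           # dropped even if the server's preflight agrees
    if name in SAFE_HEADERS:
        return "allowed"           # may be sent without a preflight
    return "needs-preflight"       # requires an OPTIONS round trip first
```

Even in this toy form, three different outcomes per header hint at the maintenance burden the text describes: every new dangerous header discovered must be folded into the block list after implementations have shipped.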
Header spoofing in XMLHttpRequest has been a common vulnerability in the past. Sending headers cross domain may allow access control rules to be changed, leaving legacy services that did not opt in to Cross Site XMLHttpRequest vulnerable.
"On the simplicity side, XDR is appropriately simple (roughly as simple as JSON Request), whereas Access Control has incrementally added complexity (syntax rules for allowing/denying domains, two-step dance for POST requests, detailed lists of headers that are transmitted) to the point that it is now a small beast." - http://lists.w3.org/Archives/Public/public-webapi/2008Apr/0099.html
"> On Wed, 14 May 2008, Bjoern Hoehrmann wrote:
>> Note that there are more headers on the list than the ones listed above,
>> specifically Proxy-*, Sec-*, and it is unclear how to handle, say, the
>> Cookie and Authorization header.
> I think I would lump the Cookie, Cookie2, and Authorization headers in
> same bucket as, e.g., Host -- these are headers that the UA should be
> setting and not headers that should be under author control.
Agreed, I added these.
> Incidentally, I think I would recommend removing the blacklist from AC,
> since AC has a whitelist. Having both seems pointless.
Access Control for Cross-Site Requests does actually allow arbitrary
headers in the request, though a preflight request is required if they are
not in the whitelist. Therefore it is important that the blacklist is
still there to filter out all headers that should not be allowed even if
the server agrees. (Arguably this blacklist is not relevant in the
XMLHttpRequest case because there those headers are filtered at an earlier stage.)"
"It was possible to add illegal and malformed headers to an XMLHttpRequest. This
could have been used to exploit server or proxy flaws from the user's machine, or
to fool a server or proxy into thinking a single request was a stream of separate
requests. The severity of this vulnerability depends on the value of servers which
might be vulnerable to HTTP request smuggling and similar attacks, or which share
an IP address (virtual hosting) with the attacker's page.
For users connecting to the web through a proxy this flaw could be used to bypass the same-origin restriction on XMLHttpRequests by fooling the proxy into handling a single request as multiple pipe-lined requests directed at arbitrary hosts. This could be used, for example, to read files on intranet servers behind a firewall." - http://www.mozilla.org/security/announce/2005/mfsa2005-58.html#xmlhttp
"After reading the great post, I must say, "Hacking
the Interwebs" by the GNUCitizen team, I thought that it would be a waste not
to try and find a way of attacking UPnP without the Flash requirement.
Basically, what needs to be achieved in order to attack the device through UPnP over HTTP is to:
- Be able to send a "POST" request to the device's IP address.
- Be able to set the "SOAPAction" header of the "POST" request.
If we'll disregard that the device might have XSS vulnerabilities, another way of breaking the same origin policy is DNS pinning.
I was about to start and investigate whether XmlHttpRequest and DNS pinning can be used to attack UPnP enabled devices, just to find out that someone else has already done this research. And this was done almost a year ago!" - http://aviv.raffon.net/2008/01/15/HackingTheInterwebsFlashless.aspx
Allowing Arbitrary HTTP Verbs
Allowing arbitrary HTTP verbs to be sent cross domain may allow unauthorized actions on the server. Creating complex rules to secure this opens up the possibility for other types of attacks.
- Do not allow verbs other than GET and POST. This is in line with the capabilities of HTML forms today and is what HTML 4 specifies.
- If verbs are sent cross domain, pin the OPTIONS request for non-GET verbs to the IP address of subsequent requests. This will be a first step toward mitigating DNS Rebinding and TOCTOU attacks.
- Using XMLHttpRequest to do this is inherently more complicated as XHR has its own rules for blocking verbs.
- Server-side proxying is a safer way to enable these scenarios.
Sites today do not expect HTTP verbs other than GET and POST (what HTML 4.01 forms allow) to be sent cross domain. AC tries to solve this by requiring that any non-GET verb first trigger one OPTIONS request from the browser and one response from the server, unlocking cross-domain sending of ALL verbs to the domain. This decision may also be cached on the client for future requests.
There are also redirection cases that require new checks.
This is further complicated by XMLHttpRequest's own rules on which HTTP verbs, like TRACE and TRACK, are blocked by default. This is especially scary given that user-sensitive information can be transmitted in CS-XHR using cookies and credentials; any compromise here will lead to actions (specified by the verbs) being taken on behalf of the user (whose authority is conveyed by the cookies and credentials).
This multistage handshake in the case of non-GET requests opens the possibility for attacks like DNS rebinding and time-of-check, time-of-use (TOCTOU) attacks, where a change occurs between consecutive requests. For non-GET requests, DNS-rebinding mitigation depends on the server actively validating the Host header. In CSRF attack scenarios using non-GET verbs, victim devices across the network are unlikely to do this, as the victim server never intended to take part in a Cross-Site XMLHttpRequest transaction. This violates the Web API AC draft's own requirement: "Must not require content authors or site maintainers to implement new or additional security protections to preserve their existing level of security protection."
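The Host-header validation mentioned above can be sketched as follows; `EXPECTED_HOSTS` is an assumed server-side setting, and the helper name is ours:

```python
# Minimal sketch of the Host-header check discussed above. EXPECTED_HOSTS is
# an assumed server-side setting listing the names this server answers for.
EXPECTED_HOSTS = {"api.example.com", "api.example.com:443"}

def host_header_valid(headers):
    """Under DNS rebinding the attacker's hostname resolves to this server's
    IP, so the browser sends the attacker's domain in Host; reject it."""
    return headers.get("Host", "").lower() in EXPECTED_HOSTS
```

The point of the argument above is precisely that victim devices such as routers will not ship a check like this, because they never intended to participate in cross-domain traffic.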
"I still think that the vector of attacks you cite are open, difficult problems
that exist outside the scope of AC. I mean, standard requests for HTML documents
often are multipart requests today on the web, and thus are prone to similar attack
vectors (in your example, replace the notion of XS-XHR with requests for pages and
add "attacker can insert themselves into the stream..."). But thinking about economizing
on headers or connections in general is a good thing; I'm just not sure I have a
straw person yet as to where this can be done (right now I'm grasping at straws
like Keep-Alive :-) ).
Thank you for sharing your security concern here. I worry that this will become a bit of an intractable debate about direction, but the concerns are definitely worth thinking about, and I'm glad we're airing them." - http://lists.w3.org/Archives/Public/public-webapi/2008May/0350.html
"That is the big problem with XDR's restrictions. Well, aside from its breaking
of REST by disallowing PUT and DELETE and setting the Content-Type and Accept-*
headers, while favouring SOAP."
"I characterize the web-apps that I develop as being RESTful, and don't see any compelling value proposition in the various SOAP related specifications. The XDR proposal adequately supports all of the programming patterns that I find useful in a RESTful web browser application. This outcome doesn't seem to be accidental, but rather seems to be the result of the IE Team's approach of modeling their proposal off the de facto security policy defined by HTML 4. The prohibition against HTTP methods other than GET and POST, as well as the limitations on HTTP headers, do not originate with the XDR proposal, but rather are a carryover from the HTML 4 specification. I doubt the authors of the HTML specification intended to be creating a security policy when they specified the limitations upon the FORM element, but that is in effect what they were doing. The limitation on the FORM's method attribute to the values of "get|post" has become a security policy relied upon by Web resources. The same is true of the use of HTTP headers. We have all been building our web applications within these constraints, for as long as there has been a Web. The XDR proposal does not introduce any new limitations that we must abide by in creating web applications, and so cannot be said to break anything." - http://lists.w3.org/Archives/Public/public-webapi/2008Apr/0095.html
"My own opinion is that the bulk of the power of the RESTful approach comes from
the ability to define a custom URI namespace, do POSTs, and GETs with caching. These
things are supported by the XDR proposal."
Authors are encouraged to check the Origin HTTP header, especially for non-GET requests, to ensure that in case of policy change they do not inadvertently allow access due to race conditions (when such access should be denied).
In addition to checking the Origin HTTP header authors SHOULD also check the Host HTTP header and make sure the host name provided by that header matches the host name of their server. This will provide protection against DNS rebinding attacks." - http://dev.w3.org/2006/waf/access-control/
Cross-domain attacks are on the increase, and educating developers and implementers by recommending best practices will go a long way toward reducing XSS, CSRF, and other common attacks. We're working with the Web API WG as well as other organizations to exchange thoughts and secure design patterns. We'd like to hear your voice on ways to improve security in both XDR and CS-XHR.
For feedback on CS-XHR, please join the public mailing list at http://www.w3.org/2006/webapi/
For feedback on XDR and or blog posts on the topic, feel free to contact email@example.com.
These are the concerns that have been raised regarding XDR that we'd like to address.
- XDR may allow POSTs of arbitrary content to intranet servers, without
This is currently not possible with XDR in IE8. The XDR design does not require zone awareness; this should be considered an optional part of the specification. Zones are used as an attack surface reduction in IE, as they are supported by the IE/Windows platform. Note that XDR supports only the GET and POST methods; DELETE and other methods are not supported. In addition, XDR is intended for "public" data, and we explicitly suggest that intranet servers do not expose private data through this mechanism. To ensure that no existing servers/services (in any zone) are put at risk, XDR does not send credentials of any sort and requires that the server acknowledge the cross-domain nature of the request via the response header. Other implementers of XDR SHOULD consider blocking access to private address spaces (e.g., RFC 1918) from public address spaces.
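The suggested blocking of private address spaces could be implemented roughly as below, assuming the implementation has already resolved the target hostname to an IP address; `is_private_target` is our illustrative name:

```python
# Sketch of the suggested mitigation: refuse XDR-style requests from public
# pages to private address space. Assumes the implementation has already
# resolved the target hostname to an IP address before connecting.
import ipaddress

def is_private_target(ip_text):
    """True for RFC 1918, loopback, and link-local addresses, which a page in
    a public zone should not be allowed to reach cross domain."""
    ip = ipaddress.ip_address(ip_text)
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

An implementer would apply this check after name resolution, on the address actually being connected to, so that DNS tricks cannot smuggle a public-looking hostname onto an intranet address.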
- XDR may foster an environment in which sites request user credentials
for third-party sites from users resulting in an environment where users give
their passwords to any sites that request them causing phishing and fraud attacks.
XDR v1 explicitly addresses the scenario of anonymous access to public data. We advise developers against using XDR to send user credentials. Alternatives to XDR exist today, like HTML 5.0's own cross-document messaging and the common technique of server-side proxying used by sites that deal with sensitive information, such as major financial institutions and tax-preparation web applications. As detailed in Section 5 of this paper, we have concerns with the approach CS-XHR is using for sending credentials cross domain and have outlined other alternative techniques that warrant further investigation.
- XDR forces all content sent in POST entity bodies to be labeled
as Content-Type: text/plain, regardless of the type. This may require servers
to ignore the Content-Type header and apply sniffing heuristics to detect the
actual type of the content sent, potentially leading to privilege escalation
attacks (e.g. if the user thinks he's uploading a PNG but the server sniffs
it as an HTML file and sends it back as such).
The reason we do this is to stay within the capabilities of HTML forms today, as specified in HTML 4.0. In addition, it is well known that the Content-Type header cannot be relied upon to determine content type, and servers must be robust against this; such validation may happen out of band. That said, we would love to hear community feedback on this subject.