[OpenAjaxSecurity] Blog entry attacks W3C access control spec

Gideon Lee glee at openspot.com
Thu Aug 30 09:47:07 PDT 2007


I agree with Howard that the sandbox module approach provides a more flexible and proven security model.  There is a totally different class of problems that can be solved if the service side can supply not only data but also code running in a sandbox.
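
To make the idea concrete, here is a rough sketch of what a sandbox-module API might look like from the consumer page's point of view (everything in it, from loadSandboxedModule to the channel object and the host names, is hypothetical rather than an existing API):

  // Hypothetical sketch only; no such API exists today.
  // The consumer page asks the browser/runtime to run provider-supplied
  // code in an isolated sandbox and talks to it over an explicit channel.
  var channel = loadSandboxedModule('https://provider.example.com/widget.js', {
    allowNetworkTo: ['provider.example.com'],  // the module may only call home
    allowDomAccess: false                      // and may not touch the page's DOM
  });

  channel.onMessage(function (msg) {
    if (msg.type === 'quote') {
      showQuote(msg.payload);                  // hypothetical consumer-side renderer
    }
  });

  channel.send({ type: 'configure', symbol: 'IBM' });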

Regarding the blog and the W3C draft, it is worth pointing out first the difference between the usage scenarios each of them anticipates.  According to section 1.2 (Security Considerations), the proposed mechanism is aimed primarily at enabling trusted EAI solutions on a LAN, where the (trusted) web service providers can declare a simple wildcard list of who the (trusted) consumers are.

This mechanism is quite an improvement over a per-site "crossdomain.xml" (a la Flash) because it is applied at the per-resource level.  Theoretically at least, a very diligent service provider can always create a uniquely named service per consumer-site/end-user pair to allow for rather fine-grained access control.
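
To make the per-resource point concrete, here is a minimal provider-side sketch (the paths, host names, and the response.setHeader call are illustrative, not any particular server API):

  // Illustrative sketch only: each consumer-specific resource carries its
  // own Content-Access-Control value, so access is granted per resource
  // rather than per site as with a single crossdomain.xml.
  var allowListByResource = {
    '/feeds/portal-a/user-123.xml': 'allow <portal-a.example.com>',
    '/feeds/portal-b/user-123.xml': 'allow <portal-b.example.com>'
  };

  function addAccessControlHeader(requestPath, response) {
    var rule = allowListByResource[requestPath];
    if (rule) {
      // Only the consumer named for this particular resource may read it.
      response.setHeader('Content-Access-Control', rule);
    }
  }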

So this is really the kind of bread-and-butter mechanism that EAI developers have wanted for years. To that limited end, and to the extent that developers are made aware of what this mechanism is meant to be, I think it is a nice proposal.

But it runs the risk of false advertising if it is marketed as a general remedy for XSS attacks!  IMHO, it provides a useful step towards the remedy, but not the remedy itself. Technical glitches outlined by the blog aside, there is a set of fundamental trust-relationship issues that need to be resolved before a proposal like this is useful.

Now, I assume that it is meant as a remedy for how JSON is used today, where the data consumer is exposed to the risk of untrustworthy data providers.  And there, the anticipated usage scenario already differs from the W3C draft's: we are talking about data consumers who cannot determine the trustworthiness of the data providers.

Trust goes both ways.  If the consumers cannot generally trust the providers, the providers cannot generally trust the consumers either!  So we merely reverse the asymmetry. If the practical implication of JSON is that many mash-up sites put their trust in a few well-known data providers, this mechanism in substance merely reverses that, encouraging many data providers to put their trust in a few mega consumer-portal sites.

That may well be good enough, for the almost-completely-trusted portals can act as middlemen brokering multiple partial-trust relationships: if a user has a partial-trust relationship with provider A and a partial-trust relationship with provider B, portal C can act as the middleman to move data between A and B without exposing either provider's identity to the other (A to B / B to A) or the user's identity to either (U to A / U to B).
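
A purely illustrative sketch of that arrangement (all names and endpoints are made up); A and B only ever talk to C:

  // Hypothetical sketch of portal C brokering between providers A and B.
  // C fetches from A under its own credentials, strips anything that
  // identifies the user, and forwards the result to B, so A never learns
  // about B, B never learns about A, and neither learns who the user is.
  function stripUserIdentifiers(record) {
    return { items: record.items };          // keep the data, drop the identity
  }

  function brokerTransfer(fetchFromA, pushToB) {
    fetchFromA('/provider-a/data', function (dataFromA) {
      pushToB('/provider-b/import', stripUserIdentifiers(dataFromA));
    });
  }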

**BUT**, the world is really more complicated than that!  Even if we can postulate that everyone trusts C, what if A or B is willing to serve only some users U, but not others? What if A is willing to serve U if and only if B is not serving U at the same time? These cases still require high-level negotiation between A, B, and C that goes beyond a binary allow/deny. C really can't do a good job mediating the partial-trust relationships.

It might be going too far for us to attempt to address those issues here.  But as long as principals have incentives to engage in XSS, there will always be XSS.  A better solution ought to reduce the incentive itself...

Best,

Gideon




  ----- Original Message ----- 
  From: Howard Weingram 
  To: OpenAjax Alliance Security Task Force 
  Sent: Thursday, August 30, 2007 7:24 AM
  Subject: Re: [OpenAjaxSecurity] Blog entry attacks W3C access control spec


  Note that if we are thinking about isolating modules within the 
  web page (son of sandbox), then access permissions would 
  need to be specified per module; just because a script in 
  module #1 can invoke a web service, doesn't mean that 
  a script in module #2 should be allowed to invoke that 
  web service. 

  I realize that the issues discussed in the post are much more
  fundamental than this, however. 

  Best Regards,
  Howard



----------------------------------------------------------------------------
    From: security-bounces at openajax.org [mailto:security-bounces at openajax.org] On Behalf Of Jon Ferraiolo
    Sent: Thursday, August 30, 2007 1:18 AM
    To: security at openajax.org
    Subject: [OpenAjaxSecurity] Blog entry attacks W3C access control spec


    Being on vacation this week, I haven't had time to do anything but skim this article yet, but what I saw from skimming reinforced my concerns. It would be great if others on this task force could read through the blog and the access control spec and volunteer their analysis.

    Jon


    http://www.gnucitizen.org/blog/i-dont-think-that-you-understand-firefox3-vulnerable-by-design


    I don't think that you understand! - Firefox3 Vulnerable by Design
    published: August 25th, 2007
     

    I was going through the latest entries in my feed reader when I stumbled upon Mozilla Aims At Cross-Site Scripting With FF3. Wow, this is interesting. So I clicked on the link and started reading. The more I read, the more I knew it was a big screw-up from the start. 

        Mozilla is aiming to put an end to XSS attacks in its upcoming Firefox 3 browser. The Alpha 7 development release includes support for a new W3C working draft specification that is intended to secure XML over HTTP requests (often referred to as XHR), which are often the culprit when it comes to XSS attacks. XHR is the backbone of Web 2.0, enabling a more dynamic web experience with remote data.
    Uh? What is that? How is that going to prevent XSS? But wait, it gets even more interesting. 
        "Cross site XMLHttpRequest will enable web authors to more easily and safely create Web mashups," Mike Schroepfer, Mozilla's vice president of engineering, told internetnews.com.
        A typical XSS attack vector is one in which a malicious Web site reads the credentials from another that a user has visited. The new specification could well serve to limit that type of attack though it is still incumbent upon Web developers to be careful with their trusted data.
    First of all, this technology is not going to prevent XSS. This is guaranteed. Second, it may only increase the attack surface, since developers will abuse this technology, as is the case with Adobe Flash's crossdomain.xml. And finally, the proposed W3C specification is insecure from the start. Let's see why this is the case. 
    The specification describes a mechanism where browsers can provide cross-domain communication (something that is currently restricted by the same-origin policy) via the almighty JavaScript XMLHttpRequest object. You can grant access to external scripts in either of the following ways: 

    Content-Access-Control header 

    The idea is that the developer provides an additional header in the response. Here is an example: 

    Content-Access-Control: allow <*.example.org> exclude <*.public.example.org>
    Content-Access-Control: allow <webmaster.public.example.org> 
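
    To see how a browser might evaluate those rules, here is a simplified, purely illustrative matcher (not the draft's actual algorithm): the requesting host is checked against each allow/exclude pattern in turn.

    // Simplified illustration of matching the requesting host against the
    // allow/exclude patterns above; not the draft's actual algorithm.
    function hostMatches(pattern, host) {
      if (pattern === '*') return true;
      if (pattern.indexOf('*.') === 0) {      // wildcard rule such as *.example.org
        return host.slice(-(pattern.length - 1)) === pattern.slice(1);
      }
      return pattern === host;
    }

    function accessGranted(rules, requestingHost) {
      // rules look like: [{ allow: ['*.example.org'], exclude: ['*.public.example.org'] }]
      for (var i = 0; i < rules.length; i++) {
        var allowed = rules[i].allow.some(function (p) { return hostMatches(p, requestingHost); });
        var excluded = (rules[i].exclude || []).some(function (p) { return hostMatches(p, requestingHost); });
        if (allowed && !excluded) return true;  // any rule that matches grants access
      }
      return false;
    }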

    So, as long as the response contains a header specifying that the requesting site, which hosts the script, can access the content, no cross-domain access restrictions will be applied. The bad news for this approach is that there is an attack vector known as CRLF injection. If any part of the user-supplied input is used as part of the response headers, attackers can inject additional headers to grant themselves access. Here is a scenario where this attack can be applied: 

    Case study 1: MySpace implements a new AJAX interface for the user contact list section. The list is delivered as XML. This REST service takes a couple of parameters, one of which is echoed as part of the response headers. Although by default attackers cannot read the XML file due to the same-origin policy, they can now trick the browser into letting them do so via CRLF injection. The attack looks like the following: 

    var q = new XMLHttpRequest();
    // the %0D%0A (CRLF) sequence injects an extra response header that
    // grants access to any requesting site
    q.open('GET', 'http://myspace.com/path/to/contact/rest/service.xml?someparam=blab%0D%0AContent-Access-Control: allow <*>');
    q.onreadystatechange = function () {
      // read the document here
    };
    q.send();

    Oops! This is how we trick the browser into believing that the above site grants us full access to the user's private contact list. But wait, this is not all. I think the W3C forgot about the infamous TRACE and TRACK methods and the vulnerabilities associated with them. Cross-site tracing (XST) attacks are considered somewhat theoretical because there is no real scenario in which attackers can take advantage of them. One way to exploit XST is to have access to the target content via XSS, but if you already have XSS then what's the point? However, if the new spec is implemented, we have a whole new attack vector to worry about. So we are not really fixing the XSS problem; we are in fact contributing to it. Here is a demonstration of a cross-site tracing attack, again against MySpace. 

    var q = new XMLHttpRequest();
    q.open('TRACE', 'http://myspace.com/path/to/contact/rest/service.xml');
    q.setRequestHeader('Content-Access-Control', 'allow <*>'); // ask the server to echo this header back to us
    q.onreadystatechange = function () {
      // read the document here
    };
    q.send();

    That was too easy. I hope that FF3 restricts the XMLHttpRequest object from setting the Content-Access-Control header, but then I guess we could use Flash or Java to do the same, or at least somehow circumvent FF's header restrictions. I don't know. 
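
    Something along these lines is what I am hoping for on the browser side; this is just an illustrative sketch of the idea written as page script, not anything FF3 actually ships:

    // Illustrative sketch only: refuse to let page script set security-sensitive
    // request headers. A real browser would enforce this internally, not in JS.
    var forbiddenHeaders = ['content-access-control', 'host', 'referer', 'cookie'];
    var realSetRequestHeader = XMLHttpRequest.prototype.setRequestHeader;

    XMLHttpRequest.prototype.setRequestHeader = function (name, value) {
      if (forbiddenHeaders.indexOf(name.toLowerCase()) !== -1) {
        throw new Error('Refusing to set restricted header: ' + name);
      }
      return realSetRequestHeader.call(this, name, value);
    };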

    And finally, I would like you to pay attention to the fact that the browser checks the script's access control only after the request has been delivered. Uh? Haven't you learned? CSRF!!! This means that we can now make arbitrary requests to any resource with surgical precision. Port scanning from JavaScript will become as stable as it can get. Why, you may ask? Here is a demo: 

    try {
     var q = new XMLHttpRequest();
     q.open('GET', 'http://<some host>:<port of interest>');
     q.onreadystatechange = function () {
       if (q.readyState == 3) {
         // port is open
       }
     };
     q.send();
    } catch(e) {} 

    This port scanning method does not work today, but it will if the W3C standard is implemented. With current browser behavior, the above code will crash and burn at the q.send() step: it won't fire a request unless the origin matches the current one. With the new spec in place, however, the q.send() step will fire. Then, while loading the document, the onreadystatechange callback will be called several times, for states 0 (uninitialized), 1 (open), 2 (sent) and 3 (receiving). At state 4 (loaded), the request will fail with a security exception. But we have already passed state 3 (receiving), which acknowledges that the remote resource is present. Here is a simple script that can be used to port scan with the new W3C spec. It should be super accurate: 

    function checkPort(host, port, callback) {
      try {
        var q = new XMLHttpRequest();
        q.open('GET', host + ':' + port);
        q.onreadystatechange = function () {
          if (q.readyState == 3) {
            // we reached the "receiving" state, so something is listening
            callback(host, port, 'open');
          }
        };
        q.send();
      } catch (e) {
        // inspect the exception type here if you want to distinguish failures
        callback(host, port, 'closed');
      }
    }

    for (var i = 0; i < 1024; i++) {
      checkPort('target.com', i, function (host, port, status) {
        console.log(host, port, status); // do something with the result
      });
    }

    <?access-control?> processing instruction 

    OK. Bad news. But check this out. The W3C standard suggests that we can embed the access control mechanism into the XML document itself. Here is an example: 

    <?access-control allow="*"?>
    <list>
     <email>joe at avarage.com</email>
    </list> 

    This cross-domain access control mechanism is also subject to the TRACK/TRACE and CSRF (port scanning and state detection) vulnerabilities. Luckily, it is not vulnerable to CRLF injection. However, if the internal FF or IE XML parsing engine happens to be vulnerable to some buffer overflow, we will be screwed big time. But that is another story; it requires more research and, of course, the presence of an actual software vulnerability. Keep in mind that I am just speculating here. 

    In conclusion 

    For God's sake, do not implement this standard. Can't you see? It will open a can of worms (literally). And please, don't say that this specification will prevent XSS. It doesn't! I can see how the W3C spec will enable developers to go further and do even more exciting online stuff, but is it worth the price? You tell me, because I don't know what the heck you have been thinking. 

    WARNING: None of the above attacks has been verified. The conclusions about possible vulnerabilities within the specification have been drawn simply from reading the W3C working draft. However, given that Firefox follows specifications to an extent no other browser vendor does, there is a high chance that the vulnerabilities mentioned above may work. Thank you.



------------------------------------------------------------------------------


  _______________________________________________
  security mailing list
  security at openajax.org
  http://openajax.org/mailman/listinfo/security