FAQ
You can start using the API in Chromium-based browsers (it's already available in Chrome stable). Check the instructions. For non-Chromium browsers like Firefox or Safari, use the polyfill.
Most likely - yes.
The main difference between Trusted Types and Content Security Policy (CSP) is that CSP is an exploit mitigation - it addresses the symptoms of a vulnerability. It does not remove the underlying bug from the code (e.g. injection of unsanitized untrusted data into HTML markup), but rather attempts to prevent its exploitation. Depending on the nature of the specific injection, there can still be security issues (see http://lcamtuf.coredump.cx/postxss/).
Trusted Types, on the other hand, address the root cause. They help developers build applications that are, to a high degree of confidence, fundamentally free of the underlying injection bugs.
That said, CSP is a valuable complementary mitigation. For example, Trusted Types cannot address server-side injections (reflected / stored XSS), but CSP targets those as well. Given a web framework that helps with setting up and maintaining policies, it takes very little effort to deploy, and there's really no reason not to use it in addition to Trusted Types.
Note that it is easy to deploy a CSP that ends up being ineffective (see e.g. the CSP Is Dead, Long Live CSP whitepaper or the script gadgets research). Please follow the most up-to-date recommendations from this presentation if you want to add CSP to your application.
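Those recommendations boil down to a nonce-based policy with 'strict-dynamic' and locked-down object-src and base-uri. A sketch of such a header (the nonce is a per-response random placeholder, and fallbacks for older browsers are omitted):
Content-Security-Policy: script-src 'nonce-{RANDOM}' 'strict-dynamic'; object-src 'none'; base-uri 'none'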
See also #116.
In principle - no; in practice - yes.
Trusted Types aim to lock down the insecure-by-default parts of the DOM API that end up causing DOM XSS bugs in web applications. Our data from the Google Vulnerability Rewards Program consistently shows that DOM XSS is the most common variant of XSS.
Additionally, Trusted Types allow applications to be designed in a way that isolates the security-relevant code into orders-of-magnitude smaller, reviewable and controllable fragments. While it is possible that those (user-defined) functions are insecure and introduce DOM XSS, the task of preventing, detecting and fixing them becomes manageable, even for very large applications. And this is what may in practice prevent DOM XSS - our data shows that the vast majority of DOM XSSes reported to the Google Vulnerability Rewards Program would be stopped by that approach (the remaining ones being e.g. bugs in the sanitization logic).
Trusted in this context signifies that the application author is confident that a given value can be safely used with an injection sink - she trusts it does not introduce a vulnerability. That does not imply that the value is indeed safe - that property might be provided by a sanitizer or a validator (which might be used internally in Trusted Types policies; in fact, that's very much the recommended approach).
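As a minimal sketch of that recommended approach - assuming the DOMPurify library is loaded, and with illustrative names throughout:
const htmlPolicy = trustedTypes.createPolicy('app-html', {
  // The security-relevant logic lives here, in one small, reviewable place.
  createHTML: (input) => DOMPurify.sanitize(input),
});
// Values minted by the policy are "trusted" in the above sense.
element.innerHTML = htmlPolicy.createHTML(untrustedInput);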
It's commonly thought that DOM-based XSS can be comprehensively addressed by providing a built-in sanitizer for risky values (e.g. HTML snippets). The reasoning is - if values are sanitized, XSS can't happen.
Good sanitizers already exist in JavaScript libraries, and we still struggle with XSS. Some of the bugs are caused by rare sanitizer bypasses, but according to Google's data these account for only a small minority of the root causes of XSS. In fact, most of the time developers simply don't sanitize the user data at all when using it with risky APIs. So even a perfect, bug-free sanitizer would not address most of the DOM XSS problems.
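To make that concrete, the typical bug is not a subtle sanitizer bypass, but a sink reached without any sanitization at all - a short sketch, again assuming DOMPurify is available:
// The common bug: a sanitizer exists in the codebase, but this code path never calls it.
element.innerHTML = location.hash.slice(1);
// What was intended:
element.innerHTML = DOMPurify.sanitize(location.hash.slice(1));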
The actual requirement is that the sanitizer - for example a built-in one - is called implicitly on all the sinks. If the browser does not enforce that, the most common source of security bugs - omitting the call to the security rules - remains unaddressed.
However, even when a browser enforces sanitization, we're still left with a problem: web applications legitimately use patterns that would be blocked by a naive sanitizer. For example, many applications load scripts dynamically, and some use inline event handlers - or eval(). Some applications want to sanitize data not only for DOM XSS prevention, but also to e.g. prevent DOM clobbering. So every sanitizer needs to be configured for a given web application anyway, as there needs to be an allowed way of e.g. doing dynamic script loading - and such configuration hooks must exist. Judging from existing sanitizers, the configuration options tend to grow.
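As an illustration of such an application-specific configuration expressed with Trusted Types, here is a hedged sketch of a policy that only permits dynamic script loading from a single origin (the policy name and the origin are made up):
const scriptPolicy = trustedTypes.createPolicy('dynamic-scripts', {
  createScriptURL: (url) => {
    // Only allow scripts from one application-chosen origin.
    const parsed = new URL(url, location.href);
    if (parsed.origin === 'https://static.example.com') {
      return parsed.href;
    }
    throw new TypeError('Untrusted script origin: ' + parsed.origin);
  },
});
const script = document.createElement('script');
script.src = scriptPolicy.createScriptURL('https://static.example.com/app.js');
document.head.appendChild(script);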
Additionally, if the sanitizer is always called by the browser, it has to be one monolithic sanitizer. The complexity of the rules of such a sanitizer tends to grow linearly with the application. Moreover, the sanitizer behavior needs to be consistent over time, as developers expect Web APIs to be stable. What follows is that it becomes tricky to even fix some bypass bugs in the built-in sanitizer, as the code change may cause existing applications to break.
Trusted Types aim to address the problem from a different angle. Instead of focusing on neutralizing string values by pushing them through a centralized sanitizer, they allow risky APIs (like the DOM XSS sinks) to be locked down to accept only certain objects. The security then comes from controlling how those objects are created.
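In practice, once enforcement is on, the sinks reject plain strings and only accept the policy-created objects - a minimal sketch, assuming a policy like the htmlPolicy created earlier and the require-trusted-types-for 'script' directive being set:
// With require-trusted-types-for 'script' enforced, this line would throw a TypeError:
// element.innerHTML = untrustedInput;
// Only values that went through a policy are accepted:
element.innerHTML = htmlPolicy.createHTML(untrustedInput);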
This approach has several advantages:
- there can be multiple sources (policies) of allowed objects, making securing modular applications easier, and allowing the security rules to be small and isolated from the rest of the application.
- the authors are in control of the rules. That lets them review and lock down the rules, develop and fix the bugs in them together with the application.
- it provides good, type-based primitives to build future Web APIs and user libraries on top of. For example, when we notice that developers struggle with writing secure rules, we may implement a Web API that provides them - and returns Trusted Types instances, or policies.
As a sidenote - Trusted Types can use a sanitizer, even a built-in one. For example:
// Content-Security-Policy: require-trusted-types-for 'script'; trusted-types default;
trustedTypes.createPolicy('default', {
createHTML: navigator.sanitizeHTML,
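  // navigator.sanitizeHTML stands in for a built-in sanitizer; any sanitizing function could be plugged in here.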
});
does exactly that.
Initially, Trusted Types did not have a concept of policies - one could instantiate a TrustedHTML value simply by calling a static function. We believe policies are an improvement over that, even though they make the API shape more complex.
- Policies add an effective security control point. The application owners may lock down their application by guarding policy creation. It's much more feasible to inspect and review the one or two places in the application where a policy with its rules is created, rather than every place that creates a type instance that will end up in an XSS sink. In other words, grepping for new TrustedHTML is only a mild improvement over grepping for .innerHTML, as there would be hundreds of such places.
- Policy creation can be centrally controlled. For example, a Content-Security-Policy: trusted-types libfoo libbar header says "I'm OK with the policies that libfoo and libbar create, but I want no others". We weren't able to come up with an alternative, sensible browser API that centrally limits how each instance of a type is created.
- Policies allow the developers to specify the sanitization rules - in fact, they force them to express in code the security rules governing their application (where do we load the scripts from? Do we allow eval?). The rules may be different, or even insecure, but they have to be explicitly provided, allowing the security folks to analyze them (and, perhaps, scream in pain).
- Policy objects serve as capabilities to create types, allowing the application owners to limit where the types get created at all. For example, certain policies may be used only within a certain module, where exposing them globally would be insecure.
- Policies allow for controlled delegation of security decisions. Nearly all applications - even small ones - have dependencies that authors cannot or should not gratuitously rewrite. Oftentimes, those dependencies require different sanitization strategies. Creating additional policies and handing them to other parts of the code allows a page to delegate sanitization, without losing control over the overall security policy. For example:
const policy = trustedTypes.createPolicy('foo', {createHTML: s => /* insecure! */ s});
export const BOOTSTRAP_HTML = policy.createHTML(
    '<application-template><script>bootstrap();</script></application-template>');
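Policy creation itself can then be allowlisted centrally with the trusted-types directive, so that only the policy from the snippet above (named 'foo') may be created:
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types foo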
The amount of complexity added by policies is rather minimal: there's one accessor (trustedTypes) as the entry point, and createPolicy as the single function for creating policies. The complexity can always be abstracted away in user libraries, whereas adding necessary granularity to an API that doesn't provide it is hard.
There are two parts to TT:
- the JavaScript API (e.g. the trustedTypes.createPolicy function), and
- the enforcement via CSP (the require-trusted-types-for directive).
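A common way to combine the two parts during a migration is to ship the JavaScript changes while keeping enforcement in report-only mode. A sketch of such a header (the reporting endpoint is illustrative):
Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; report-uri /csp-violation-report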
A surprising feature of Trusted Types (TT) - in sharp contrast to other XSS mitigations - is that you might get their benefit even in browsers that don't support Trusted Types. The reason is that TT enforcement (via the CSP header) forces you to change the code of the website to use the JavaScript API and create a Trusted Type instead of a string (and, e.g., perform appropriate data sanitization). So, in the end, if your application is fully TT compliant (i.e. it doesn't cause TT violations in a TT-supporting browser), there are no data flows that take a string of user input and pass it to a sink without going through the TT policies.
To make sure that the same sanitization logic takes place in all browsers, you just have to make sure that there is at least mock support for trustedTypes.createPolicy. This can be achieved by using the api_only polyfill or the tinyfill. After that, all string data ending up in DOM injection sinks flows through the TT policy rules you created - and this is what addresses DOM XSS. The CSP enforcement merely asserts that your "TT coverage" is complete.
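The tinyfill amounts to roughly the following sketch - it only makes createPolicy hand back the rules object, so your sanitization functions run in every browser, without any enforcement:
if (typeof trustedTypes === 'undefined') {
  // No native support: policy.createHTML(x) simply becomes rules.createHTML(x).
  window.trustedTypes = { createPolicy: (name, rules) => rules };
}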
There are a couple of caveats here:
- This doesn't work if you are using a default policy. That behaviour has to be fully polyfilled to reap its benefits.
- This doesn't cover javascript: URL blocking, which TT enables by default. A CSP script-src directive without the unsafe-inline keyword can be used for that purpose (see the sketch below).
- This doesn't cover guarding which policies can be created in your application (the trusted-types directive).
- Browser-specific code that interacts with the DOM may still introduce DOM XSS. TT violations in this code might not have been detected when migrating the application to TT.
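For the javascript: URL caveat above, a conventional script-src directive without the unsafe-inline keyword closes the gap - a sketch with an illustrative allowed origin:
Content-Security-Policy: script-src 'self' https://static.example.com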