
Trust but Verify - Subresource Integrity

April 14th, 2020 | By Paulo Silva | 4 min read

Trust but Verify: Subresource Integrity is part of a blog series about trust. That may sound strange, since in security we learn not to trust anyone or anything.

In the not-so-distant past, to work around browsers’ limit on concurrent connections to the same domain, we started spreading content across subdomains we controlled. Most of the time, they were just aliases for our application or website domain.

Then, to speed up our applications, we moved our applications’ and websites’ static content to third-party storage, accessible over HTTP(S). That was the day CDNs (Content Delivery Networks) entered our relationship with end users.

Trusting third parties to deliver our JavaScript

From that day on, we started trusting third parties to store our content and later serve it directly to our users. The problem is that we have no guarantee that end users will receive exactly what we put in that storage.

Now let’s narrow the discussion to JavaScript.

As JavaScript libraries (e.g., jQuery) grew in popularity and became almost ubiquitous, we also began trusting third parties to deliver those libraries directly to our users’ browsers. After all, for a not-so-small static asset, offloading delivery was a good way to save some bandwidth.

All of us know how important JavaScript is in our applications and websites and how multiple JavaScript libraries interact with each other. But what if //some-third-party.tld/jquery-latest.js is not what we were expecting? How bad can things get if some-third-party.tld is hacked and jquery-latest.js is modified, perhaps to carry extra malicious code?

Getting straight to the point: by trusting third parties to deliver our JavaScript, we’re allowing them to interfere not only with our applications/websites but, more importantly, with our users. And our users trusted only us in the first place. Do you see the conflict?

Last May, we were at OWASP AppSec EU 2015, where we met Frederik Braun from Mozilla, who was presenting “Using a JavaScript CDN that cannot XSS you—with Subresource Integrity”, a talk about the Subresource Integrity W3C Working Draft. The goal of this work is:


Compromise of the third-party service should not automatically mean compromise of every site which includes its scripts. Content authors will have a mechanism by which they can specify expectations for content they load, meaning for example that they could load a specific script, and not any script that happens to have a particular URL.


Keeping things simple, as they should be, this working draft introduces a new integrity attribute for the <script> tag, which enables the developer to tell the browser to perform an integrity check before executing the script: execute the script only if its digest matches the one provided.

<script src="https://analytics-r-us.com/v1.0/include.js" integrity="sha256-SDfwewFAE...wefjijfE" crossorigin="anonymous"></script>
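To produce the value for the integrity attribute, you hash the exact bytes of the file and Base64-encode the digest. Here is a minimal sketch in Node.js (the file name include.js is just an assumption for illustration):

const crypto = require('crypto');
const fs = require('fs');

// Hash the exact bytes that will be served; a single changed byte
// produces a completely different digest.
const body = fs.readFileSync('include.js');
const digest = crypto.createHash('sha256').update(body).digest('base64');

// Paste this value into the integrity attribute of the script tag.
console.log(`sha256-${digest}`);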


Hash digests have been used for years to check the integrity of downloaded files.

The content provider gives you the file and publishes a digest on their server. After downloading, you compute the local file’s digest using the same cryptographic hash function (historically MD5, though stronger functions such as SHA-256 are preferred today) and compare it with the published one. If they match, you are good to go; otherwise, the downloaded file is deemed corrupted.
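In code, that verification is just a digest comparison. A minimal Node.js sketch, where the file name and the published digest are hypothetical placeholders:

const crypto = require('crypto');
const fs = require('fs');

// The file we downloaded and the digest the provider published
// (hypothetical values; the published digest is hex-encoded SHA-256).
const downloaded = fs.readFileSync('package.tar.gz');
const published = '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08';

const local = crypto.createHash('sha256').update(downloaded).digest('hex');
console.log(local === published ? 'File is intact.' : 'File is corrupted!');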

Subresource Integrity scenarios

Subresource Integrity can also be used in scenarios where you are loading third-party mashups or widgets. Sometimes that code will be very dynamic and, as such, not a very good candidate for hashing. But if you are loading dynamic code from a partner website, you’d better trust them, right?

Maybe that can be alleviated by splitting the mashup into its static and dynamic parts, and then reducing the dynamic part to a JSON payload that you can validate on your side, as sketched below.
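A minimal sketch of that idea, with hypothetical endpoint and field names: the static part of the widget ships as a script with an integrity attribute, while the dynamic part arrives as JSON that is validated before rendering:

// Fetch the partner's dynamic data as JSON and accept only the fields
// we expect, with the types we expect (endpoint and fields are hypothetical).
async function loadPartnerData() {
  const response = await fetch('https://partner.example/widget-data.json');
  const data = await response.json();

  // Whitelist-style validation: reject anything that doesn't look right.
  if (typeof data.title !== 'string' || !Array.isArray(data.items)) {
    throw new Error('Unexpected widget data; refusing to render');
  }

  // Keep only what we validated, coercing items to plain strings.
  return { title: data.title, items: data.items.map(String) };
}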

Consider another scenario where client-side code uses this feature to verify that the scripts being loaded from its own server are valid. This may sound a bit silly at first.

Usually, we don’t trust the client, whereas the server is usually trustworthy, and good channel encryption protects you from corrupted code being loaded. However, serious vulnerabilities were recently discovered in OpenSSL, MitM (Man-in-the-Middle) attacks have proven to be alive and well, and Man-in-the-Browser (MitB) attacks keep rising on client devices.

Using Subresource Integrity to check resources loaded from our own server could be an extra verification that those attacks would have to defeat. And since many MitM and MitB attacks are still quite simple, this would be a real extra hurdle.
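As a sketch of that idea (the helper name and expected digest are assumptions, not from the draft), the Web Crypto API available in modern browsers can hash a fetched script and execute it only when the digest matches a value shipped with the page:

// Fetch a script from our own server, hash it with the Web Crypto API,
// and execute it only if the SHA-256 digest matches the expected value.
async function loadVerifiedScript(url, expectedSha256Base64) {
  const response = await fetch(url);
  const bytes = new Uint8Array(await response.arrayBuffer());

  // Compute and Base64-encode the digest of the downloaded bytes.
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  let binary = '';
  for (const b of new Uint8Array(digest)) binary += String.fromCharCode(b);

  if (btoa(binary) !== expectedSha256Base64) {
    throw new Error('Integrity check failed for ' + url);
  }

  // Verified: inject the code as an inline script so the browser runs it.
  const script = document.createElement('script');
  script.textContent = new TextDecoder().decode(bytes);
  document.head.appendChild(script);
}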

Conclusion

Subresource Integrity is a (not so) simple improvement that can make a huge difference in web application security.

Other techniques, like Content Security Policy (CSP), are available and should be part of any application security policy.
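As a quick illustration (the CDN origin is hypothetical), a CSP response header can restrict where scripts may be loaded from in the first place, complementing the integrity checks above:

// Minimal Node.js sketch: send a Content-Security-Policy header that only
// allows scripts from our own origin and one trusted CDN.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader(
    'Content-Security-Policy',
    "script-src 'self' https://trusted-cdn.example.com"
  );
  res.end('<html><script src="/app.js"></script></html>');
}).listen(8080);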

A final thought about trust: Should we trust (free) obfuscators?
