FYI re Tor browser:[0]
> 15. HSTS and HPKP supercookies
> An extreme (but not impossible) attack to mount is the creation of HSTS supercookies. Since HSTS effectively stores one bit of information per domain name, an adversary in possession of numerous domains can use them to construct cookies based on stored HSTS state.
> HPKP provides a mechanism for user tracking across domains as well. It allows abusing the requirement to provide a backup pin and the option to report a pin validation failure. In a tracking scenario every user gets a unique SHA-256 value serving as backup pin. This value is sent back after (deliberate) pin validation failures working in fact as a cookie.
> Design Goal: HSTS and HPKP MUST be isolated to the URL bar domain.
> Implementation Status: Currently, HSTS and HPKP state is both cleared by New Identity, but we don't defend against the creation and usage of any of these supercookies between New Identity invocations.
0) https://www.torproject.org/projects/torbrowser/design/ [February 19th, 2018]
Why don't browsers just try https first? So if I type "example.com" into the box it tries https://example.com.
Why is implementing HSTS and preserving the current behavior more useful?
HSTS is trying to protect against a specific kind of Man-in-the-Middle (MITM) attack: when the man in the middle pretends that the website you are trying to access does not support HTTPS.
I believe trying HTTPS first wouldn't help: the MITM would refuse your connection, and your browser will try HTTP after that.
With HSTS, the server tells your browser that it is going to support HTTPS for a while. Now, if your first connection to the server is secure (no MITM), from then on your browser will know that this particular domain supports HTTPS. So it will know something fishy is going on if a MITM tries to pretend otherwise.
Trying HTTPS first would still help a lot in other cases, such as the one in the article. None of the super cookie HSTS techniques would have worked in the first place if the browser had just always tried to use HTTPS first.
Probably other unknown vulnerabilities could be averted by just trying HTTPS first too. Not doing so should be considered bad practice, with or without HSTS.
In particular, there is no reason, if I type news.ycombinator.com in my address bar, to expand it with http:// instead of https://.
HSTS also protects external URLs; an old link to http://news.ycombinator.com gets internally rewritten to https://news.ycombinator.com without making the cleartext request. So HSTS is a more general solution.
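That internal rewrite can be sketched as follows; this is a minimal illustration of the RFC 6797 upgrade behavior, not any browser's actual implementation, and the in-memory store is an assumption:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical in-memory HSTS store: hosts the browser has previously
# received a Strict-Transport-Security header from.
hsts_hosts = {"news.ycombinator.com"}

def apply_hsts(url: str) -> str:
    """Rewrite http:// to https:// for known-HSTS hosts before any
    network request is made, so no cleartext request ever goes out."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunsplit(("https",) + parts[1:])
    return url
```

The key point is that the rewrite happens before the request leaves the browser, so even an old `http://` link never touches the network in cleartext.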
Probably there will come a time when attempting HTTPS first instead of HTTP for manually-typed URLs without a protocol is the right default, but that's just a subset of the problem.
Couldn't dmm's proposal be used for links as well?
Because any attacker in the middle can purposefully make the HTTPS connection fail, thereby causing all data to be sent in the clear. HSTS and HSTS preloading make sure that this cannot happen.
Because even today there are sites that serve different content on https vs. http -- basically they reserve https for payment and the like.
Doing https first would mean those sites were broken.
Arguably, they already are broken. Plenty of people type in https out of habit already. If your website doesn't handle that correctly, you're in for trouble.
I don't know anybody who types https or http before typing a URL. Some older people I know type www, but even that has decreased in the last 10 or so years.
The worst thing is that some websites do understand what HTTPS is but still refuse to deploy it: to handle users who consciously type https://, they deploy a valid HTTPS certificate on the webserver and issue 301s redirecting them back to http://. slashdot.org used to do this, and bbc.co.uk still does, which is a shame. I know it's often a stop-gap measure in preparation for universal deployment, but most websites that use this hack don't seem to have any plan to secure their sites.
That's what EFF's HTTPS Everywhere plugin does when set to strict mode.
On a related note, would strict mode thwart HSTS-based cookies?
My current understanding of this technique is that it depends on the victim being able to connect to HTTP. Since strict mode prevents the victim from making normal HTTP connections, I'm inclined to believe that strict mode does help mitigate this kind of tracking.
I remembered a demo showing HSTS's potential tracking capabilities, and it doesn't work well if HTTPS Everywhere is enabled.
While I'd also like https as a default, that's not going to prevent this tracking if you still respect the intent of HSTS.
If you try https first, and that fails, do you try again over http? Whether or not you'd fallback would leak the same information.
However, the internet is a wide space full of unique and amazing uses of Web Technology. If you feel that you have a legitimate case where these new rules are not working as intended, we would like to know about it. Please send feedback and questions to web-evangelist@apple.com or @webkit on Twitter, and file any bugs that you run into on WebKit’s bug tracker.
I appreciate this part. Do they actually respond to these inquiries?
The bug tracker and Twitter account are pretty good about responding, from what I've seen. I'm not familiar with the email.
Greater use of HSTS preloading is also a good way for legitimate sites to prevent being affected by any sort of privacy crackdowns on HSTS. Preloading at the TLD level is ideal.
Preloading at the TLD level is pretty severe, will never be done for .com / .net / .org, and so will never apply to the vast majority of sites people visit. Interesting idea, but unrealistic.
Yes, it can't be done retroactively, but it can be done for new TLDs that haven't launched yet.
If I remember correctly, Google has done this with the `.dev` TLD they own, so we're already seeing some of it.
There are currently eight HSTS-preloaded TLDs with more on the way, including the first available for open registration coming soon.
Do you have a list of them? I'm especially interested in the TLD for open registration.
See https://registry.google
I think I'm missing something. Why wouldn't this attack work on, for example: bit1.com, bit2.com, bit3.com, etc.?
My interpretation was that it can only be set either on the exact hostname that was loaded, or on the second level domain name that was loaded. To set it on 32 different SLDs you'd have to redirect through 32 different domains, which the user would notice.
Note that this is for the page that was loaded only, not for additional resources hosted on other hostnames.
Redirecting 32 times could be made less obvious by opening a small popup that closes after finishing its work. It used to be possible to open a popup in the background, which would have made this almost practical.
Or just an async script?
Maybe because that is a little bit more expensive.
$32 instead of $1. OK.
Hmm?
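The bit-per-domain encoding this subthread is discussing can be sketched as follows; the domain names, the 32-bit width, and both helper functions are illustrative, not any real tracker's code:

```python
# One hypothetical tracker domain per bit of the user ID.
N_BITS = 32
DOMAINS = [f"bit{i}.example" for i in range(N_BITS)]

def write_id(user_id: int) -> set:
    """'Write' the ID: the tracker routes the victim through an
    https:// URL on each domain whose bit is 1, and those responses
    carry a Strict-Transport-Security header. The returned set models
    the HSTS state left in the browser."""
    return {DOMAINS[i] for i in range(N_BITS) if user_id >> i & 1}

def read_id(hsts_hosts: set) -> int:
    """'Read' the ID back later: load an http:// resource from every
    domain. Requests the browser silently upgrades to https:// are the
    1 bits; requests that arrive in cleartext are the 0 bits."""
    return sum(1 << i for i in range(N_BITS) if DOMAINS[i] in hsts_hosts)
```

This also shows why trying HTTPS first with an HTTP fallback doesn't help on its own: whether a given request falls back or not leaks exactly the same bit.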
What about websites that are on second-level domains? E.g. amazon.co.uk
Would example.amazon.co.uk then not be able to set HSTS for amazon.co.uk?
example.amazon.co.uk and amazon.co.uk are not matching domains as defined in the RFC[1] (they are not congruent).
The includeSubDomains directive[2] allows the HSTS policy set for amazon.co.uk to apply to its subdomain example.amazon.co.uk, but not vice versa.
[1] https://tools.ietf.org/html/rfc6797#section-8.2
[2] https://tools.ietf.org/html/rfc6797#page-16
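The matching rules from RFC 6797 referenced above can be sketched like this (a simplified model of sections 8.2–8.3, ignoring case-normalization and IP-address edge cases):

```python
def hsts_applies(request_host: str, known_host: str,
                 include_subdomains: bool) -> bool:
    """An HSTS policy applies on a congruent (exact) host match, or on
    a superdomain match only when the stored policy carried the
    includeSubDomains directive -- never in the other direction."""
    if request_host == known_host:
        return True
    return include_subdomains and request_host.endswith("." + known_host)
```

So a policy set by amazon.co.uk with includeSubDomains covers example.amazon.co.uk, but a policy set by example.amazon.co.uk can never cover amazon.co.uk.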
Yes, using pubsuffix+1 instead of TLD+1 would make way more sense.
I've asked here: https://twitter.com/gsnedders/status/974765437283119104
Per the response it is using pubsuffix+1, and hence co.uk is treated as an effective TLD.
The public suffix is more correct than the TLD; however the fact that publicsuffix.org accepts private domains at the domain owners request weakens the mitigation.
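The pubsuffix+1 keying discussed here can be sketched as follows; the suffix set is a tiny illustrative subset of the real list at publicsuffix.org, which is far larger and also contains the privately-requested entries mentioned above:

```python
# Illustrative subset of the public suffix list.
PUBLIC_SUFFIXES = {"com", "uk", "co.uk"}

def registrable_domain(host: str) -> str:
    """pubsuffix+1: the longest matching public suffix plus one more
    label. HSTS state for example.amazon.co.uk is thus keyed on
    amazon.co.uk, not on the effective TLD co.uk."""
    labels = host.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host
```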
> We modified WebKit so that when an insecure third-party subresource load from a domain for which we block cookies (such as an invisible tracking pixel) had been upgraded to an authenticated connection because of dynamic HSTS, we ignore the HSTS upgrade request and just use the original URL.
Could someone explain this?
They just assume that a third party should never need to serve an "http://" link only to redirect it to "https://", and they're willing to stop that HSTS upgrade from working even though it can produce false positives and break some sites.
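A minimal sketch of the decision the quoted passage describes, under the assumption that the browser already knows whether the load is third-party and whether cookies are blocked for that domain (not WebKit's actual code):

```python
from urllib.parse import urlsplit, urlunsplit

def resolve_subresource(url: str, third_party: bool,
                        cookies_blocked: bool, hsts_hosts: set) -> str:
    """If a dynamic-HSTS upgrade would apply to an insecure third-party
    subresource from a cookie-blocked domain, ignore the upgrade and
    fetch the original URL; otherwise honor HSTS as usual."""
    parts = urlsplit(url)
    wants_upgrade = parts.scheme == "http" and parts.hostname in hsts_hosts
    if wants_upgrade and third_party and cookies_blocked:
        return url  # ignore the HSTS upgrade request
    if wants_upgrade:
        return urlunsplit(("https",) + parts[1:])
    return url
```

The effect is that a tracking pixel's host can no longer read its HSTS bit back through a third-party subresource load, while first-party navigations keep the full HSTS protection.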
I feel like I'm missing something, but wouldn't this whole brouhaha be avoided by making the website 100% HTTPS and redirecting any HTTP requests to HTTPS at the server itself? This is common practice with Nginx, for example.