cremp 6 years ago

Is it me or...

> If they’re willing to convert all their customers to ESNI at once

Why does it seem like this is over-engineering at its finest? Not only are CDNs now part of the problem/solution space, they are now dictating the terms.

It is now that much harder to diagnose issues when they do crop up. Instead of just checking ping or nslookup, you've now got to see whether DNS-over-HTTPS, the DNS record itself, the host, the client, or any number of other steps is broken.

We've completely removed the ability for a power user to diagnose before calling their resident IT professional.

  • the8472 6 years ago

    It's even worse. To use ESNI you need DOH. To use DOH you need a resolver with a server certificate, which is kindly offered by the same cloud providers. So now all your base are belong to Cloudflare.

    • eridius 6 years ago

      None of this is tied to Cloudflare though. Or really to using a cloud provider at all. Of course, if you're not using a cloud provider, then the IP address can be used to figure out what the site is, but the point remains that you can pick from any cloud provider that supports ESNI (and, separately, pick from any DNS provider that supports DOH).

      At the moment if you want ESNI it looks like you have to use Cloudflare, but the solution to that is to encourage other cloud providers to support ESNI rather than to decry the notion of ESNI.

      • the8472 6 years ago

        > (and, separately, pick from any DNS provider that supports DOH).

        But why do I have to? I already have a trusted DNS resolver, operated by myself, wired to my OS. Why require the whole DoH Rube Goldberg machinery to let me try ESNI?

        • floatingatoll 6 years ago

          DNS is plaintext, like HTTP, so running your own resolver does nothing to stop your internet provider from selling your resolved-domains list - in aggregate or in specific - to other companies for revenue.

          There are three well-known trusted public DNS resolvers, run by Cloudflare, Verizon, and Google.

          Which of those three would you encrypt your DNS traffic to, if those were the only three options available other than plaintext for all to see?

          • davidu 6 years ago

            DNS does not need to be plaintext and DoH is not the only alternative. It's the alternative that the advertising engines and CDNs prefer because it extends control. The privacy of DNS argument is a major red herring.

            DNSCrypt (or DNS over TLS or DTLS) is a wonderful alternative that works in-band and works with DNSSEC.

            People are also ignoring the consequences of the switch from UDP to TCP.

            • nine_k 6 years ago

              If I remember correctly, DNS over HTTPS just recently landed in the generally available Firefox 62?

          • rocqua 6 years ago

            I think the specific question is: Why prefer `DNS over HTTPS` over e.g. `DNS over TLS`. The relevant matter is that DNS requests are encrypted. How exactly it is encrypted should not matter.

            • Ajedi32 6 years ago

              AFAIK it doesn't matter. DNS over TLS would work fine too (though I'm not sure if Firefox supports it). The important thing is that you're not using plaintext DNS, as that would defeat the purpose of using ESNI in the first place.
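
              To make the "how exactly it's encrypted shouldn't matter" point concrete, here's a minimal sketch of a DNS-over-TLS lookup (an illustrative example only: it assumes Python with the third-party dnspython package, and Cloudflare's public resolver at 1.1.1.1:853 chosen purely as an example upstream). It's an ordinary DNS message; the only differences from classic DNS are the TLS wrapper and the 2-byte length prefix that DNS over TCP already uses:

                  import socket, ssl, struct
                  import dns.message  # third-party "dnspython" package

                  query = dns.message.make_query("example.com", "A").to_wire()
                  ctx = ssl.create_default_context()
                  with socket.create_connection(("1.1.1.1", 853)) as sock:                      # DoT port (RFC 7858)
                      with ctx.wrap_socket(sock, server_hostname="cloudflare-dns.com") as tls:  # normal certificate validation
                          tls.sendall(struct.pack("!H", len(query)) + query)                    # length-prefixed, as in DNS over TCP
                          (length,) = struct.unpack("!H", tls.recv(2))
                          print(dns.message.from_wire(tls.recv(length)))                        # sketch only; real code would loop on recv()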

            • u801e 6 years ago

              What about applications that communicate over protocols other than HTTP(S)?

        • lilyball 6 years ago

          If your browser is talking to a DNS resolver wired to your OS, that doesn't change what any upstream network observer sees, since they can observe DNS requests generated by your OS-level DNS resolver exactly as they would observe DNS requests generated by your browser.

          DOH isn't about trust. It's about preventing network observers from figuring out what sites you visit by observing the DNS requests you make.

          • tialaramex 6 years ago

            Not _only_ about trust. One of the things DoH gets you for free is that it means that your ISP doesn't get to touch DNS requests, which was not true previously. You can get Internet service from whatever bunch of money-grabbing assholes are available where you live, and get DNS from somebody else without that being tampered with since it's encrypted on the wire. You do still have to trust the DoH provider (outside of DNSSEC).

            In the UK for example, the current government proposals for yet more Internet censorship assume that they can "just" order ISPs to censor DNS. This, their white paper says, is relatively cheap and so ISPs might even be willing to do it at no extra cost, which is convenient for a supposedly "small government / low tax" party that keeps thinking of expensive ways to enact their socially regressive agenda...

            But DoH and indeed all the other D-PRIVE proposals kill that: to censor users of D-PRIVE you're going to have to operate a bunch of IP-layer stuff, maybe even try to do deep packet inspection, which TLS 1.3 already made problematic and eSNI skewers thoroughly.

            So there's a good chance this sort of thing for _ordinary_ users (the white paper already acknowledges that yes, people can install Tor and it can't do anything about that) makes government censorship so difficult and thus expensive as to be economically unpalatable. "Won't somebody think of the children" tastes much better when it doesn't come with a 5% tax increase to pay for it...

            • xg15 6 years ago

              Conveniently, the same will also apply to me trying to block apps from talking over the network with things they should not talk to.

              • Ajedi32 6 years ago

                You could always point your DNS at your own DNS-over-HTTPS server, then configure that server to forward requests over an encrypted connection to another DNS-over-HTTPS server.

                Don't know if there are any tools available right now that will do that for you, but there's no technical reason why it isn't possible.
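
                As a rough sketch of the forwarding half of that idea (assumptions: Python with the third-party requests library, and Cloudflare's RFC 8484 endpoint picked purely as an example upstream), a tiny stub can listen for ordinary DNS on localhost and relay each raw query to an upstream DoH server:

                    import socket
                    import requests  # third-party HTTP client

                    UPSTREAM = "https://cloudflare-dns.com/dns-query"  # any RFC 8484 endpoint would do

                    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    sock.bind(("127.0.0.1", 5353))  # use port 53 if running with sufficient privileges

                    while True:
                        wire, client = sock.recvfrom(4096)      # raw DNS query from a local client
                        reply = requests.post(                  # the same bytes, carried inside HTTPS
                            UPSTREAM,
                            data=wire,
                            headers={"content-type": "application/dns-message"},
                        )
                        sock.sendto(reply.content, client)      # hand the DNS answer back unchanged

                Point the OS resolver at 127.0.0.1 and every application on the machine gets the encrypted upstream hop, not just the browser.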

                • xg15 6 years ago

                  I think the problem will be apps with a DoH service hardwired. There wouldn't be anything for me to point anywhere short of patching the app.

                  Yes, apps could theoretically already do this today if the developers are willing to run their own endpoints. However, my guess is this will become vastly easier to do when there are already public DoH endpoints available to connect to.

          • vetinari 6 years ago

            There isn't anything that would prevent the local resolver from talking upstream using DoH.

            Same privacy, minus the cloud companies that try to insert themselves as middlemen.

            • eridius 6 years ago

              DOH does not rely on cloud companies. The requirement for DOH has nothing to do with cloud companies "try[ing] to insert themselves as middleman". You certainly could argue that ESNI should be used even if DNS isn't done over DOH, in cases where you trust the network path to your recursive resolver and trust the recursive resolver itself to be using DOH upstream, but that also has nothing to do with cloud companies.

              I admit I'm not sure why ESNI requires DOH (it's not particularly useful without DOH, but that's not an argument to disable it); my guess is that it's simply due to the additional resource usage necessary to process ESNI requests, so Firefox doesn't want to put this additional load on the server if there's no practical benefit to doing so.

              Ideally, an OS-level resolver would have a way to tell the browser that the recursive resolution was encrypted (whether via DOH or something else doesn't really matter), but I'm not aware of any way to do that at the moment.

              • vetinari 6 years ago

                DoH by itself does not, but the way it is getting implemented, most users will just happen to be dependent on them. That's why I wrote that.

                Instead of pushing this functionality into OS resolvers and standalone resolvers used by networks, it is being pushed into commonly used applications, with cloud companies providing the other end by default.

                ESNI doesn't require DoH, but there's no point in using it without DoH: if your network can see the DNS records you are asking for, it can then check the encrypted SNI against them (it will have the same key you are using to encrypt, so it can do the same computation and match).

                Some resolvers (systemd-resolved, ducks and hides) do use a custom API to attach properties to responses. That was the only way to get DNSSEC status, so it could be reused to indicate the upstream protocol too. However, not many applications use that; they rely on the standard gethostbyname(), which doesn't provide anything similar.

                • eridius 6 years ago

                  Most users will just happen to be dependent on cloud providers for DoH not because of anything inherent to DoH, but because at the moment only cloud providers are offering DoH-enabled resolvers. If you don't like this, then instead of decrying the usage of DoH, you should be pushing for more DNS providers to offer DoH.

                  I do agree that pushing DNS functionality into apps instead of the OS level is suboptimal, and I certainly hope that, if Firefox proves that DoH works well, it will be adopted by the major OS's (along with a way to query the OS resolver to check if it's using DoH or not) so that all apps can benefit from it instead of just web browsers that reimplement DNS.

                  Of course, IIRC Chrome at least (not sure about Firefox) has already been implementing DNS resolution itself for a long time rather than relying on the OS resolver, so the idea of a web browser doing DNS directly instead of relying on the OS is not a new one. I'm not sure why Chrome does this though.

              • the8472 6 years ago

                > Ideally, using an OS-level resolver would have a way to tell the browser that the recursive resolution was encrypted

                A simple flag to configure this would have done the job. I don't like how browsers are pretending that they have security needs that are special compared to any other application and thus need to pull in the whole network stack and bypass the OS on everything.

                It causes duplicate effort if you want to secure your whole network instead of only the browser. It also limits technology choice. I'm forced to use DoH even though there are other options.

          • the8472 6 years ago

            I am perfectly capable of having my DNS resolver use a different, encrypted route than the HTTPS traffic.

            This is not mozilla's decision to make.

    • jopsen 6 years ago

      But if you're using CloudFlare through Firefox, Mozilla is doing collective bargaining on your behalf.

      It's a different world, true, but technology can't be stopped. If Mozilla succeeds in being an agent negotiating on behalf of users, all your base might be governed by reasonable contracts.

      • auslander 6 years ago

        Thing is, it does bargain, and trusts a third party's privacy policy, but I, for example, do not trust Cloudflare.

        "We’ve chosen Cloudflare because they agreed to a very strong privacy agreement" [0]. Like, legally agreed? With regular audits and full access for Mozilla people?

        Where does that leave me, if it gets baked into my browser?

        [0] https://blog.nightly.mozilla.org/2018/06/01/improving-dns-pr...

        • jgrahamc 6 years ago

          The quote you make there from your reference [0] has a link to the legal agreement with Cloudflare. It's here: https://developers.cloudflare.com/1.1.1.1/commitment-to-priv... So you can read the legal agreement.

          Of course, if you still don't want to use Cloudflare for DoH you can just configure your favourite resolver in Firefox itself. The blog you refer to as [0] contains detailed instructions on how to do that.

          So, where are you left? Right where you are today: you control the DNS resolver on your machine today. With Firefox Nightly you also control the DoH resolver (and can disable it entirely).

          • auslander 6 years ago

            My concern is whether this integration with CF will make its way into default FF install.

            • jopsen 6 years ago

              But if you do DNS resolution yourself you'll lose privacy.

              You'll probably always be able to run your own, if you so desire.

              • auslander 6 years ago

                I use DNS over TLS via multiple resolvers, so that's not my case.

                Privacy is a fragile thing. Is it better to put all my lookups in one basket (CF)? For hiding from the ISP, nothing beats a VPN, and in that case there's no need for ESNI. My point is it's not up to Mozilla to hand my data to a third party.

                And, frankly, if choosing between the ISP and CF (or Google), leaks to the ISP impact your privacy much less. The ISP has no global data to ML your history against, no analytics cookies, no cleartext traffic access.

        • jopsen 6 years ago

          You seem to complain, but I hear no alternatives...

          The world is moving towards more cloud computing; Mozilla can't stop the centralization of the internet. But if they can use collective bargaining to protect consumers, that might do a lot of good.

          • auslander 6 years ago

            Installing unbound, which supports DoT and multiple resolvers, on your home router, for example.

    • js2 6 years ago

      > To use ESNI you need DOH

      This is a Firefox decision, not something required by the standard:

      https://tools.ietf.org/html/draft-ietf-tls-esni-01#section-7

      ESNI is best combined with DoH to prevent snooping (hence Firefox's apparent decision to tie the two features together), but obtaining the ESNI key does not strictly require DoH.

    • AndyMcConachie 6 years ago

      It's even worse because they talk like ESNI is some kind of standard, but there's only been a single draft at the IETF written by a Mozilla employee, and that draft is still at version 1. Calm down Mozilla, maybe other people would like to comment on the design before you go and implement it?

      Doing it like this is a great way to end up in interoperability hell down the road when different parties have implemented different versions. I'm not saying they have to wait until it's an RFC, but at least wait for a couple more versions of the draft and let the IETF discuss it a bunch first. This is a big change.

      • tialaramex 6 years ago

        > but there's only been a single draft at the IETF written by a Mozilla employee

        I can see why it might look that way, but actually draft-ietf-tls-esni-01 is the third draft of this document, and has been co-written by at least four named authors including Chris Wood at Apple. Also that "Mozilla employee" was one of the Working Group chairs.

        draft-ietf-tls-esni-01 was preceded by draft-ietf-tls-esni-00 (it is usual for early drafts to have zero zero versions)

        draft-ietf-tls-esni-00 was preceded by draft-rescorla-tls-esni which was Eric Rescorla's first write-up of this idea

        Finally, though this document didn't exist twelve months ago, the "issues and requirements" document did. This document imports the thinking behind that one; it just provides an implementation, and now Firefox is testing it.

        The reason for the name change is a thing called "adoption". The TLS Working Group agreed by consensus to adopt this piece of work, rather than it just being independent stuff by a handful of people who coincidentally were working group members. When that happens the draft's name changes, to reflect the adoption (removing a single person's name) and sometimes to use more diplomatic naming (e.g. the "diediedie" draft got a name that didn't tell TLS 1.0 to "die" any more when it was adopted).

      • orf 6 years ago

        Why does Chrome get to do this but not Mozilla? Why is field testing such an implementation a bad thing? What do more comments add over real, concrete data (up to a point)?

      • bryanlarsen 6 years ago

        IETF standards require prior implementation and testing.

    • jgrahamc 6 years ago

      Not really. Use whatever DoH provider you like. Nothing here says "You can only use Cloudflare".

  • toomuchtodo 6 years ago

    It's not you. It's a combination of startups and incumbent tech behemoths attempting to operate outside of the formalized process for internet standards by using their market power to push for the change they deem appropriate.

    There are benefits (censorship circumvention) to be reaped, but also great peril.

    • dagenix 6 years ago

      I'm very confused - are you saying that the announcement of experimental ESNI support is an example of companies "attempting to operate outside of the formalized process for internet standards"? If so, I really don't see how that is true - they implemented a draft IETF spec - https://datatracker.ietf.org/doc/draft-ietf-tls-esni/ - If that isn't working with a standards body, I'm not sure what is.

    • tptacek 6 years ago

      Funny, I think we've had exactly the opposite problem. See, for instance, Heartbleed, which is a pure product of IETF standardization of a feature no mainstream commercial entity asked for.

      • ploxiln 6 years ago

        It was not required to implement the TLS heartbeat feature, and IIRC most TLS implementations did not implement it - except for OpenSSL. The real problem there was OpenSSL (and it was a big problem, being both the widespread default choice and too hairy for most competent engineers to bother to dig into, at the same time ...)

        • tptacek 6 years ago

          Again, nobody was asking for the Heartbeat feature. I went back and read through the (IIRC?) tls-wg posts about it. The same is true of extended-random† --- 4 different proposals! None of them were really pushed back on. It was just sort of assumed that if nobody strongly objected, it was going to become part of the standard.

          DNSSEC is another great example. Look around. Nobody in the industry is asking for it (try that "dnssec-name-and-shame.com" site to confirm this), except the IETF and a very short list of companies with a rooting interest, like Cloudflare. In the very short time it's been around, DNS over HTTPS has done more to improve DNS security than 25+ years of DNSSEC standardization ever did. The cart has been dragging the horse here for a long time.

          https://sockpuppet.org/blog/2015/08/04/is-extended-random-ma...

      • toomuchtodo 6 years ago

        I don't disagree there are problems with not involving commercial stakeholders in the standardization process, and your Heartbleed example is poignant. I feel that there is a middle ground that would be more beneficial to all stakeholders in the long run. I'm just asking for some balance. The implementations of today evolve into the legacy systems that will need to be supported and maintained for years, if not decades.

        • tptacek 6 years ago

          Yes, and I think what you're looking at now is balance. The way standards are supposed to work is that companies (among other users) come up with features that they want, and get them working, and then the IETF is supposed to hammer out agreement on how to make those features interoperate. And that's it.

          It was never the idea that IETF was meant to be an Internet legislature adjudicating what features can and can't be supported in protocols. But that's exactly what it has become.

          • tialaramex 6 years ago

            Let's take TLS as an example. Nalini and co. wanted at first to put back RSA in TLS 1.3, they wanted that feature, the TLS Working Group felt that their charter effectively ruled it out. In your opinion was this working group acting as an "Internet legislature" by not having RSA in TLS 1.3?

            Gradually Nalini's lot discovered a very important thing about the IETF: It is not a democracy. They tried sending more and more people, attempting the same thing that made Microsoft's Office into an ISO standard - pack the room with people who vote how you tell them. But there aren't any votes at the IETF, you've just sent lots of people half way around the world to at best get recruited for other work and at worst embarrass themselves and you.

            After they realised that stamping their feet, even if in large numbers, wouldn't get RSA back in TLS 1.3, they came up with an alternate plan for what was invariably named "transparency" (when you have a bad idea, give it a name that sounds like a good idea, see also: most bills before US Congress) but is of course always some means to destroy forward secrecy or to enable some other snooping.

            Now, IMNSHO the Working Group did the right thing here by rejecting these proposals on the basis that (per IETF best practice) "Pervasive Monitoring is an Attack". Was this, again, the "Internet legislature" since Nalini and co. wanted to do it and they'd expected as you've described that if they wanted to do it the IETF should just help them achieve that goal?

            Well if you're sad for Nalini there's a happy ending. The IETF, unlike a legislature, has no power whatsoever to dictate how the Internet works.

            ETSI (a much more conventional standards organisation) took all the exciting "Transparency" work done by Nalini's group and they're now running with it. They haven't finished their protocol yet, but in line with your vision it enables all the features they wanted, re-enables RC4 and CBC and so on. They've published one early draft, but obviously ETSI proceedings (again unlike the IETF) happen behind closed doors.

            You are entirely welcome to ignore TLS 1.3 and "upgrade" to the ETSI proposal instead. Enjoy your "freedom" to do this, I guess?

            • tptacek 6 years ago

              After Heartbleed, a lot of things in the TLS ecosystem got better. CFRG is now chaired by Kenny Paterson, and he and others ran interference for an academically-grounded rebuild of TLS for 1.3. Google beat the living shit out of OpenSSL, and expedited the deployment of 1.3.

              I agree: the 1.3 process is better than what came before it. But it's the exception that proves the rule: the 1.3 process was a reaction to the sclerotic handling of security standards at IETF prior to it.

              My point is simple and, I think, pretty obviously correct: you can't look back over the last 10-15 years of standards group work and assume that either IETF approval or multi-party cooperation within IETF is a marker of quality. And that's as it should be: it's IETF's job to ensure interop, not to referee all protocol design. More people should work outside of the IETF system.

    • zzzcpan 6 years ago

      > There are benefits (censorship circumvention) to be reaped, but also great peril.

      There is no way this encrypted SNI could enable censorship circumvention.

      • toomuchtodo 6 years ago

        My understanding was that encrypted SNI is meant to avoid sniffing of the hostname that travels unencrypted in the SNI field, and that in combination with a large host (Cloudflare, Google, AWS), an encrypted SNI prevents censorship (unless you're dropping all traffic to the IP block or AS). Is this understanding inaccurate?

        • vetinari 6 years ago

          If you are a state-level actor wanting to censor, you (or the ISPs in your country) will politely ask the large host (Cloudflare, Google, AWS) to cooperate and drop any traffic for the censored domain that comes from your country's IP ranges.

          Only if the large host does not cooperate will the respective ISPs block its entire IP range (if they want to keep operating in the given jurisdiction).

        • zzzcpan 6 years ago

          That was their public reasoning, but it ignores all the research and ideas in censorship circumvention from Tor, Signal, and Telegram, why domain fronting didn't work, why collateral freedom doesn't work if there is a corporation to pressure, etc. At this point I believe it was entirely PR, if not deception, to centralize DNS.

  • drb91 6 years ago

    I hope they add DNS resolution to the network activity tab.

    • lucb1e 6 years ago

      DNS resolution is visible in one of the about: pages, iirc about:networking. But yeah, in dev tools would be much more convenient.

    • cremp 6 years ago

      They can't, because that's handled at the OS level, not the application level.

      If a browser starts (purposefully) subverting the hosts file or not adhering to resolv.conf addresses, then we've got a bigger problem.

      Think of a fat client resolving an address differently than a browser; that's all sorts of Pandora's box.

      • drb91 6 years ago

        DNS over HTTPS is still handled at the OS level?

        Related: it should be possible to have "correct" DNS in userland that behaves as you describe, minus falling back to the system resolver. In my understanding, the whole point of DNS over HTTPS is to avoid the DHCP-assigned DNS address (and of course to encrypt).

        Finally, I'm pretty sure Firefox at least does its own DNS caching. I've had to force reload to pick up DNS changes already visible to the system resolver.

        • Brybry 6 years ago

          I think the typical way to do DNS over HTTPS is to run a DoH client/DNS proxy and then point your nameservers at localhost.

          I'm not really sure what benefit there is to doing this compared to DNS over TLS with a resolver like Unbound but I suppose that's a different discussion.

          What Firefox seems to be doing, unless I'm mistaken, is running their own resolver that implements DoH/connects to Cloudflare and bypasses OS settings.[1][2]

          I haven't dug into the details yet to see how it interacts with the hosts file.

          It does sound like it falls back to the OS if it fails to resolve with DoH but this solution at first glance appears unideal.

          Wouldn't it be best if Microsoft/Apple/*nix distros/ISPs/third party nameservers used resolvers and nameservers that support DNS over TLS?

          Then end users/administrators could choose who they trust and everything would still be encrypted.

          [1] https://wiki.mozilla.org/Trusted_Recursive_Resolver

          [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1434852

        • cremp 6 years ago

          DoH isn't done by the OS. But that's my point. In order to use DoH, you have to (purposefully) use an extension/browser addon/browser setting.

          As a system admin myself: if user applications started overriding the DHCP DNS that I give them, not only could intranet sites be broken, but I'd start having fights with users about it.

          Edit: Rather, not overriding but querying the DoH instead of the provisioned DHCP DNS. I'm no expert in DoH, or how any of that works under the hood.

          Further, when/if browsers turn on DoH by default, then I can't really fight users, because they did nothing wrong but use a browser. Suddenly, I can't support a browser or two because of it.

          DNS caching by the application is fine, because it made the request to the OS and got the response. That being said, the record's TTL might be violated by that, since the record has its own TTL and the application cache uses whatever TTL it likes.

        • snvzz 6 years ago

          See dnscrypt-proxy. That's what I use on my network.

          It's the default DNS in the network. Computers do not need to know that it gets encrypted past that point.

  • sp332 6 years ago

    If you're not using a CDN, you can just enable it for your own site. They explained in the article why they didn't think enough sites would do this to make it worthwhile.

mholt 6 years ago

So what's the plan for when IPv6 gains more adoption and we don't need SNI as much, since every site can have its own public IP address (thus making tracking easier and subverting the benefits of encrypted SNI)?

Do you think encrypted SNI and NAT will become preferred to using IPv6 for routing because of the privacy benefits of ESNI (either real or imagined, depending on who you trust, since this seems to be relying on centralized CDNs for adoption)?

Edit: I realize ESNI and IPv6 are orthogonal, however, I wanted to ask since I know that lack of IP space and using NAT/SNI are correlated.

  • Someone1234 6 years ago

    You could ignore the eSNI on the web server completely if you wished. But even on HTTP servers serving only a single IP it is still typical to only respond to requests targeting the correct hostname (i.e. direct IP requests are bounced).

    There's numerous reasons to bounce direct (non-hostname) requests including:

    - Discourage users visiting via the IP Address, adding that to favorites, sharing it, etc. Making it difficult/impossible to migrate.

    - Make it harder for databases to associate that IP Address with your site (for user privacy).

    - Security. Ignoring that several modern techniques don't even work with IP Addresses, it also stops someone taking over the IP after you migrate and stealing cookies associated with that address, etc.

    Point being, SNI is useful on IPv6. It is useful on single-website servers. It isn't going anywhere.

    • mehrdadn 6 years ago

      I understood the question wasn't whether SNI is useless, but whether its primary goal of making it difficult to track website connections is being at least partially subverted by IPv6.

    • hueving 6 years ago

      >- Discourage users visiting via the IP Address, adding that to favorites, sharing it, etc. Making it difficult/impossible to migrate

      That's not a reason to reject. That's a reason to issue a redirect. No regular user that would make the mistake of bookmarking an IP instead of a domain is going to know how to use an IP to get there in the first place.

      >- Make it harder for databases to associate that IP Address with your site (for user privacy).

      You're misunderstanding how these crawlers work. They don't just walk all IP addresses because that's a good way to get an abuse letter and because if there is no redirect to a domain they don't get the domain. These crawlers just follow links like any other and log the IPs they resolve to. The only way this helps user privacy is if there are no links to your site anywhere on the internet.

      • zingmars 6 years ago

        >No regular user that would make the mistake of bookmarking an IP instead of a domain is going to know how to use an IP to get there in the first place.

        System admins probably have caught up to this now, but a decade back I remember non-techy people doing just that. Turns out their sysadmins had blocked some social networks and people found out that you can still access the site using an IP address of their servers. The IP addresses themselves were spread across the office and many people ended up with bookmarked IP addresses. So yes, it could and has happened.

  • jeroenhd 6 years ago

    I think SNI will remain in fashion even after IPv6 is introduced globally. SNI works fine with plain IPv6 addresses.

    If you run multiple websites on a single server, for example, you can move your server to another IP address / host / datacenter, update the IP address in your DNS settings, and you're done. If you instead give every site its own IPv6 address, every domain and subdomain needs to be configured with a new IP address and needs its own line in the DNS config. This leads to administrative overhead, which leads to mistakes.

    • gregmac 6 years ago

      More than that, why bother?

      If I use SNI -- which at this point is fully supported on anything that matters -- I just have to setup all the hosting definitions and make sure my server can find all the certificates.

      If I want to use one IP per site without SNI, now I have to also manually manage the mappings of IP-to-certificate for each host, and also be sure they're all synchronized with DNS.

      More work, more potential for trouble, and no real benefit.

      • maccam94 6 years ago

        Certificates are tied to hostnames, and hostnames resolve to IPs. You could have a server with 50 IPv6 IPs, have a different hostname resolve to each one, and load all of the SSL certs onto the machine along with the appropriate webserver configs. You could then add IPs to the server and DNS without touching the certs. Or replace a cert without changing anything else. You don't need SNI in this scenario, you just have to ensure you add the new IPs to the server configs before updating DNS, so it will serve the correct default certificate.

    • qwertay 6 years ago

      Unless someone writes a CLI tool that automatically sets it up for you.

  • tombrossman 6 years ago

    Interesting that you mention the privacy risk of tracking, as this[0] just appeared in my Twitter feed at about the same time I was reading HN.

    "Tracking Users across the Web via TLS Session Resumption"[1]. A snippet from the abstract: "Our results indicate that with the standard setting of the session resumption lifetime in many current browsers, the average user can be tracked for up to eight days. With a session resumption lifetime of seven days, as recommended upper limit in the draft for TLS version 1.3, 65% of all users in our dataset can be tracked permanently."

    Not exactly looking forward to TLS 1.3; it appears to be a move forward in security but with no (or worse) privacy benefits that I've seen so far.

    [0]https://twitter.com/durumcrustulum/status/105293632402455757...

    [1]http://front.math.ucdavis.edu/1810.07304

    • kardos 6 years ago

      > with the standard setting of the session resumption lifetime in many current browsers

      > seven days, as recommended upper limit

      Do we fix this by changing that setting to a few hours?

      Edit: the report discusses this: "The recommended upper limit of the session resumption lifetime in TLS 1.3 [19] of seven days should be reduced to hinder tracking based on this mechanism. We propose an upper lifetime limit of ten minutes based on our empirical observations"

      • yborg 6 years ago

        Is this a configurable option in Firefox?

  • StudentStuff 6 years ago

    I do not think most webservers will be set up with one IPv6 address per website. As is, most IPv6-enabled sites that share an IPv4 address with other sites also share their IPv6 address with those same sites. Assigning one IPv6 address per site would add complexity for little benefit.

    • pixl97 6 years ago

      Do Apache or nginx still have issues with serving lots of IPs at once? I know a long, long time ago it could be a performance issue.

      • laumars 6 years ago

        How long ago are we talking? Apache 1.3? That used to have a few nuances but it's now been superseded several times over. I didn't use 2.0 a whole lot but I don't recall it ever being an issue with 2.2 and we're onto yet another engine rewrite with version 2.4 (which has been available a few years now too).

  • hannob 6 years ago

    > So what's the plan for when IPv6 gains more adoption

    Our great great grandchildren will surely figure something out.

    • tomschlick 6 years ago

      Right now > 30% of US traffic uses IPv6 so it's already happening.

  • rqs 6 years ago

    > So what's the plan for when IPv6 gains more adoption and we don't need SNI as much since every site can have its own public IP address

    Say sometimes I love to visit a very private website for my personal pleasure when I'm alone at night.

    Without eSNI, when I type in pornhub.com and hit enter, my buddy Bob, who works for the ISP, immediately knows, and can be very sure, that I'm trying to access none other than pornhub.com. And then, with great confidence, he greedily calls me for a live chat.

    Bob is a ... special person. He might tell my mom about the pleasure thing, but not just that: he also secretly tracks my pleasure activities to figure out the pattern using some sort of weird thing called machine learning, so he can show up in front of my door at exactly the right time to share the pleasure with me.

    I don't like that.

    With eSNI, Bob only knows that I'm accessing 216.18.168.0. But when he tries to access 216.18.168.0:80, he is greeted by a 403 error which says "Invalid Host".

    A website may have many IP addresses, and an IP address can serve many websites. Because of that, Bob can now only know that MAYBE I'm watching my little pleasure, oh wait, or maybe it's imworkingverylateatnight.com? He just can't be sure anymore.

    • bscphil 6 years ago

      Maybe I'm misunderstanding, but isn't the point of the GP that as IPv6 takes over, eSNI becomes practically useless since it's possible for every site to have its own IP address? If I'm connecting to an IP address that only maps to one site, then Bob is going to be able to figure out what that site is.

      You're right that eSNI is a nice to have (though years late) for IPv4, but I and the GP would like to know what we can do to protect our anonymity with IPv6.

      • kyleomalley 6 years ago

        ESNI doesn't solve for a future where IPv6 takes over and suddenly every site has a huge block of dedicated IPv6 addresses for just that site/FQDN.

        ESNI as it has been developed essentially requires two other components to work properly:

        1) a large-scale CDN, and 2) a trusted DNS infrastructure (i.e. DNS-over-HTTPS or DNS-over-TLS).

        So people are absolutely right that in the distant future, when IPv4-fronted sites go extinct, it may be possible that site hostnames can be correlated to a set of IPv6 addresses. ESNI doesn't and can't solve for that. I imagine that as the internet continues to become more and more centralized, a few large CDNs will host most (or very close to all) internet traffic through a few sets of stabilized anycast addresses (thus obfuscating any individual hostname among many hundreds or thousands of other sites, as they would all correlate to the same IP blocks).

        That being said, I still don't understand why it's so important to have the SNI on the "outside" of the tunnel. Seems like we should have another layer before the symmetric key exchange where the SNI is exchanged on its own.

      • jeffalyanak 6 years ago

        Just because you'll have enough IPv6 addresses for every website doesn't mean you'll want to actually do that.

        It's a lot of extra hassle to set up dozens of IPv6 addresses when (e)SNI can do the same job.

        Moreover, (e)SNI has an advantage over using IP mapping: even if someone is snooping on your connection and can see that you are connecting to some IP address, they won't be able to determine what site that might be.

        If you are simply mapping IPs, they can visit that IP to see what you are visiting.

      • why_only_15 6 years ago

        I might be misinterpreting this, but on IPv6 do CDNs keep separate addresses for different sites? I suppose it would move things up a protocol level - instead of specifying it in HTTP we can specify it in IP. However, the key issue here is CDNs. In almost no other circumstances do different websites keep the same IPv4 address.

        • rocqua 6 years ago

          > In almost no other circumstances do different websites keep the same IPV4 address.

          I have multiple domains hosted on my personal site. Similarly, facebook.com and facebook.co.uk could very well point to the same IPs.

    • koolba 6 years ago

      If Bob works at your ISP, he could see the DNS queries before you even connect to the site. There are workarounds for that as well, but the average Joe isn't going to set any of them up.

      • rqs 6 years ago

        Of course. But what if DNS can also be encrypted?

        • still_grokking 6 years ago

          You mean with DoH? Served, for example, from Google's public DNS servers?

          • erinnh 6 years ago

            I'm more partial to DNS over TLS myself.

            • why_only_15 6 years ago

              I have a great new idea! DNS over TLS Co-location Open Mesh. That way, we can all be back in the DOTCOM era

  • dewiz 6 years ago

    I'd expect proxies, CDNs, and VPNs to become a common setup, although we'd have to trust these service providers not to track our traffic. Tor might be a better option for privacy, though it's doubtful that will ever succeed at large scale.

  • ComodoHacker 6 years ago

    After IPv6 adoption it could be made a legal requirement to have a unique IPv6 address for every unique domain name.

    • rocqua 6 years ago

      Having AAAA records be one-to-one would be a massive change to DNS.

  • fulafel 6 years ago

    The same problem exists with IPv4, and there are many ways to address it: Cloudflare, IPFS, Tor, web proxies, etc.

  • qwertay 6 years ago

    Problem is, with IPv6, encrypted SNI won't mean anything, because an IP address directly translates to a website, so your ISP still knows what website you are on.

  • BobFromDown 6 years ago

    0/10. IPv6 will never be the front runner.

  • otabdeveloper2 6 years ago

    IPv6 will never gain universal adoption.

    Nobody wants their home or their datacenter machines exposed to the whole Internet all the time.

    NAT is a feature, not a bug.

    • foresto 6 years ago

      That side effect of NAT is easily replaced with a simple firewall. Even today, many home routers have this capability already by virtue of running linux. Enabling it would be a pretty simple step for manufacturers.

    • aaronmdjones 6 years ago

      You need state tracking to build a masquerading NAT (or it won't know which machine to route reply packets to), and if you have state tracking, you can also build a stateful firewall, which will achieve the same thing.

      Stateful firewalls still work, have always worked, and work the same in IPv6 as they do in IPv4. Having a public, globally-routable, unique address on your internal machine, whether that's an IPv4 address or an IPv6 one, doesn't mean that anyone can connect to it. It still has to go through your router. That router can be running a stateful firewall.

      NAT is awful.

    • _ikke_ 6 years ago

      NAT is not a firewall, it's a hack to keep ipv4 working today.

  • the8472 6 years ago

    Hosters could play games with IPv6-hopping to make tracking/reverse association more resource-intensive. Their nameservers could even return different addresses for different regions. I doubt many will do that; then again, hiding behind SNI never gave you any added anonymity unless you happened to end up on some shared reverse proxy in the first place.

e1ven 6 years ago

I'm curious why this is tied to DNS-over-HTTPS.

It looks like Cloudflare is including a public key in the DNS lookup, which is used to encrypt the SNI information.

Couldn't this key be stored in a TXT record for normal DNS lookups as well?

  • rickbutton 6 years ago

    If the public key was stored in a TXT record and accessed via regular DNS, then someone snooping the connection could see that you made a DNS lookup for that domain, and could make the reasonable assumption that you were about to make a request to said domain.
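
    To illustrate the difference, here's roughly what the lookup looks like when it does go over DoH, so an on-path observer only sees an HTTPS connection to the resolver (a sketch assuming Python's third-party requests library, Cloudflare's JSON DoH endpoint, and the draft's _esni.<host> TXT record name; example.com is just a placeholder):

        import requests  # third-party HTTP client

        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": "_esni.example.com", "type": "TXT"},
            headers={"accept": "application/dns-json"},
        )
        for answer in resp.json().get("Answer", []):
            print(answer["data"])  # base64-encoded ESNIKeys structure, per the draft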

    • zzzcpan 6 years ago

      Someone could still make a very reasonable assumption based on IP addresses and response sizes, which is where I believe the primary focus would shift if by some chance this encrypted SNI becomes impossible to circumvent. But also someone could just block DNS-over-HTTPS requests altogether and force Firefox into cleartext DNS and therefore circumvent encrypted SNI.

      It also centralizes DNS requests at Cloudflare's POPs (a company from a mass surveillance, secret orders happy, police country by the way).

      No, none of it addresses privacy and security, probably only makes it worse.

      It's time to admit there is no future for privacy and security without overlay networks.

      • codetrotter 6 years ago

        > someone could just block DNS-over-HTTPS requests altogether

        If you are going to block DoH you'll need to block all HTTPS traffic altogether, won't you? I mean, unless you are just blocking traffic to some list of known DoH providers.

        • zzzcpan 6 years ago

          DoH must be bootstrapped somehow to get IP addresses to make requests to. This is where it can be blocked: either by blocking the bootstrap DNS query, or a known IP address, or just any IP address with suspiciously tiny responses that answers active probing requests as a DoH server. It's very hard to hide that fact; you need to actively fight those blocking attempts. And if it's done by some government, corporations just bend over backwards to help censorship, like they did in the cases of Signal and Telegram recently, for example.

          • zamadatix 6 years ago

            >This is where it can be blocked. Either blocking DNS query or a known IP address or just an IP address with suspiciously tiny responses that responds to active probing requests as a DoH server.

            This works great in theory, but if it takes off then Google and Cloudflare can simply decide to serve DNS-over-HTTPS requests over their existing service IP space, and you're left with the choice of blocking the internet or allowing encrypted DNS lookups.

            As far as the government comments go, you're never going to deploy public infrastructure inside a state and be able to avoid the state, so it's pointless to bring that up.

    • erinnh 6 years ago

      Could also use DoT - DNS over TLS.

      Otherwise, this sounds suspiciously like DANE, which cert authorities hate, since there would be no use for them.

      https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...

      • tptacek 6 years ago

        Except for DNS standards aficionados, pretty much everyone hates DANE, including people who hate CAs. Here's a level-headed take:

        https://www.imperialviolet.org/2015/01/17/notdane.html

        • tialaramex 6 years ago

          One thing to pay attention to in Langley's post about DANE is that he says they can't do this reliably without a way to do DNS that doesn't break when you do anything more interesting than A lookups.

          This thread is about eSNI. Guess what, eSNI can't be done reliably without a way to do DNS that doesn't break when you do anything more interesting than A lookups.

          Fortunately, Firefox has a solution for that, DoH.

          Wait, which of those two identical problems does it solve? Oh right, both of them.

          • tptacek 6 years ago

            DoH drastically reduces the impetus for the deployment of DNSSEC; it is essentially the 2018 answer to DNSCurve/DNSCrypt. Google and the Chrome team have been pretty clear about what they think about DANE's prospects moving forward.

            And, of course, you're misrepresenting Langley's blog post when you suggest that the only reason DANE isn't in Chrome is because of lookup reliability. Readers can just read the piece for themselves (it's good, and interesting!) and come to their own conclusions.

            • dagenix 6 years ago

              And definitely read the post by Thomas Ptacek linked to from that article: https://sockpuppet.org/blog/2015/01/15/against-dnssec/. He makes the excellent point that DNSSEC (and thus DANE) doesn't get rid of CAs at all - it just makes whoever controls the domain into a de facto CA. Yeah, Comodo behaved badly as a CA - so, the browsers are in the process of no longer trusting it; imagine if DANE were in widespread use and Verisign behaved badly - the browsers really couldn't do anything about it at all unless they wanted to stop supporting .com - which is impossible.

              "Let's get rid of CAs!" sounds great. "Let's replace the CAs with a less accountable set of companies and governments that are harder to punish for bad behavior" doesn't sound so great. But that's what DANE is.

              • rocqua 6 years ago

                > "Let's get rid of CAs!" Sounds great. "Let's replace the CAs with a less accountable set of companies and governments that are harder to punish for bad behavior" doesn't sound so great. But thats what DANE is.

                The common counter-point is that, because nearly every CA does domain validation, the owners of the DNS keys are already capable of getting arbitrary certs. Therefore, DANE does not give DNS key owners more power; all it does is take out the CA's as a potential failure point. And really, as long as a cert attests that "You are talking to the owner of this domain" the power over certs is always going to lie with those who control the DNS system.

                A possible response is that just taking power from the CA's but leaving power with the DNS key owners is not good enough. This does make some sense. It is not entirely clear to me how we will take this power from the DNS system though. The best bet are CT-logs, which will allow after-the-fact detection of any falsely issued certs. Notably though, attribution between CA's and the DNS system isn't solved here. Perhaps if CA's store the relevant signed DNS response, we could attribute the attack to the DNS system.

              • tialaramex 6 years ago

                Thomas Ptacek (the guy whose blog post you've linked) agrees with Thomas Ptacek (tptacek, the guy whose sub-thread you're replying to)? Not exactly a revelation.

                Also Thomas has rejected the suggestion that the parts of his post that are now hopelessly wrong should be mentioned in the FAQ he prominently links. So, that post is wrong and explicitly won't be fixed; you should not rely on the "facts" in it unless you want to get laughed at.

                Your mention of Comodo suggests you're badly confused. The Symantec hierarchy is in the process of being distrusted by the Mozilla and Google root programmes, not Comodo.

                As to .com, it already _is_ run very badly and we already do have to put up with that because there is no way to fix it. Don't put new things in .com unless you're comfortable with for-profit companies screwing you over whenever it suits them. DNSSEC can't make that worse, it's already terrible.

                That's worth emphasising - DNSSEC cannot make you more dependent on your registry operators, because you are already entirely dependent on those registry operators anyway. If the operator could be leaned on by spooks (seems plausible) that is already true today.

                • dagenix 6 years ago

                  > Thomas Ptacek (the guy whose blog post you've linked) agrees with Thomas Ptacek (tptacek, the guy whose sub-thread you're replying to)? Not exactly a revelation.

                  Lol, I didn't read the username.

                  > So, that post is wrong and explicitly won't be fixed, you should not rely on the "facts" in it unless you want to get laughed at.

                  I haven't analyzed everything in that blog post. However, I think the case it makes against DANE is convincing. I linked to that blog post since it was the one that made me realize that DANE was a bad idea when I was briefly a DANE enthusiast a few years ago.

                  > Your mention of Comodo suggests you're badly confused.

                  I accidentally slipped and used the wrong CA - I think "badly confused" is a bit strong for a slip of the tongue.

                  > Don't put new things in .com unless you're comfortable with for-profit companies screwing you over whenever it suits them.

                  It sounds like you are also arguing that DANE is a bad idea.

                  > DNSSEC can't make that worse, it's already terrible.

                  I don't think I said anything to the contrary.

                  I'm real confused by the aggressive tone - it seems like you agree with everything of substance I wrote and the things you don't agree with are things that I didn't actually say.

                • tptacek 6 years ago

                  You keep alluding to parts of that post that are outdated, but you never provide specifics.

                  I appreciate and will remember the concession that security for .COM is hopeless.

                  • tialaramex 6 years ago

                    Actually, I keep providing specifics, which you ignore. For example, the blog post declares that DNSSEC is "unsafe" because it can cause hostnames to be revealed, and you give the example of Bank of America, owners of the bankofamerica.com domain, for which you say such a policy "does not work so well".

                    But today FQDNs like 14021-nonprod.bankofamerica.com or whkgm04ye.hktskcy.apac.bankofamerica.com are not just accessible if you brute-force DNS, they're automatically published, because Bank of America in fact relies heavily on the Web PKI and its issuers log the certificates.

                    It seems very strange to focus on the security of .COM when my point is that that entire TLD is badly run. It's like focusing on how good the lock is on the front door (somebody call Deviant Ollam) at Lehman Brothers when the actual problem was that they'd invested all this money in worthless mortgage securities.

                    • tptacek 6 years ago

                      That's simply false. Bank of America's public servers with TLS certificates are logged (which, by the way, is also an operational security problem for them), but their other services are not.

                      You've picked an odd point to quibble with, since there's not only NSEC and NSEC3 but now, after the last RWC, a proposed NSEC5 to address this supposed non-problem.

    • gpm 6 years ago

      Not if that someone can intercept traffic from my computer -> public site and can't intercept my computer -> dns server.

      And since DNS queries are commonly cached locally (i.e. dns server == my computer a reasonable percentage of the time) that's not even a rare occurrence.

    • markovbot 6 years ago

      Not if you use DNS-over-HTTPS, as is required to turn this on in Firefox

      • rickbutton 6 years ago

        right, DNS-over-HTTPS solves this particular problem.

  • the8472 6 years ago

    Are there even any DoH resolvers you can run yourself? Unbound only supports DoT. It's a shame that they had to invent yet another standard.

    This makes things a lot more complex to run in a LAN, especially since you also need certificates and have to put them into the browser trust store. Just to avoid leaking SNI.

    • zamadatix 6 years ago

      The problem was that DoTLS was too easily blocked, even by accident, by networks that only allowed their own DNS server on 53, HTTP on 80, and HTTPS on 443. The easiest way to fix it, without giving network administrators a choice unless they control both the devices and the network, was to make it ride over HTTPS. Unfortunately I think that means DoTLS is likely to become unused even though it is technically a "cleaner" protocol implementation.

      As far as resolvers you can run yourself, everyone and their brother has made one - Facebook, Cloudflare, cordens, dnscrypt, your cousin's brother's sister on GitHub. Literally all there is to it is rewriting well-formed datagrams into well-formed JSON and back.
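
      Roughly speaking, a sketch of the two request shapes such a resolver translates between (assuming Python with the third-party dnspython package; the hostname and endpoint are only examples):

          import base64
          import dns.message  # third-party "dnspython" package

          wire = dns.message.make_query("example.com", "A").to_wire()

          # RFC 8484 "wire format": the untouched DNS datagram, POSTed as
          # application/dns-message or base64url-encoded into a GET parameter.
          print(base64.urlsafe_b64encode(wire).rstrip(b"=").decode())

          # JSON flavour: the same question re-expressed as query parameters, e.g.
          #   GET https://cloudflare-dns.com/dns-query?name=example.com&type=A
          #   Accept: application/dns-json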

      • vetinari 6 years ago

        There is no practical difference between blocking port 853 and blocking port 443 to well-known IPs. Each client that wants to use DNS using any protocol will have to contact it by IP first, and that IP has to be provisioned somehow. The same mechanism used for provisioning can be used for blocking.

        • zamadatix 6 years ago

          > There is no practical difference between blocking port 853 and blocking port 443 to well-known IPs.

          I'm going to split this up into two answers:

          1) The problem for "coffee shop" guest-style networks isn't so much that they care to purposefully block this type of DNS as that it's already blocked by accident, and it isn't going to get unblocked any time soon since (realistically) nobody manages these networks post-deployment. At least until 8 years later when something breaks or it becomes too unreliable and is replaced by the next in-a-box solution they bought for 80 dollars.

          2) This assumption of blocking works somewhat decently until Google or Cloudflare just decide to enable DNS-over-HTTPS on their main service IPs. Then you have the choice of blocking most of the internet or not spying on your customers' data. Now, as a business, if you own the devices and the network you can manage your devices appropriately with an explicit proxy or SSL intercept, but general ISP/guest-wifi tracking becomes an order of magnitude more difficult to do cost-effectively.

          • vetinari 6 years ago

            I'm concerned mostly with 2, but in the small-business category. Many of them do not have a proxy, SSL intercept, or MDM (they manage their computers using plain old AD, plus some BYOD devices), and having to get one significantly ups the ante for them.

            • dagenix 6 years ago

              > having to get one significantly ups the ante for them.

              I see this as a good thing. I do sympathize with small businesses that, when gaping holes in internet security are closed, are forced to retire technologies that exploited those holes. And I'm not being sarcastic about being sympathetic - businesses in this position would probably rather be spending their money on something else. But, these are security issues that impact everyone and leaving them unfixed just because it will negatively impact a group isn't a reasonable option.

            • zamadatix 6 years ago

              They've always had to get one if they wanted the security/monitoring; this just standardizes an implementation of how to get around lazy security.

              • vetinari 6 years ago

                However, neither of those is fully transparent or auto-configurable, especially for users who roam among networks. A local DNS resolver, while not 100% bulletproof, did the job acceptably.

        • yjftsjthsd-h 6 years ago

          So preload the list of IPs in the OS.

          • vetinari 6 years ago

            That's the easy case: if they are preloaded in the OS, they are well-known by definition and can be blocked statically.

            The dynamically discovered ones are the next step in the game, but you still have to start somewhere, either with a well-known IP or at least with legacy DNS.

markovbot 6 years ago

This sounds great. Does anyone know if any web servers are planning on implementing it?

  • duskwuff 6 years ago

    ESNI is primarily a feature of the SSL implementation, not of the web server. As far as I'm aware, it's not in OpenSSL yet, but is likely to be added once TLS 1.3 has been finalized.

  • erinnh 6 years ago

    I have heard that Google's BoringSSL already supports it, but I have found no actual evidence of that.

    Cloudflare apparently does support it already for all their websites though (or at least the ones that also use Cloudflare DNS); a quick way to check is sketched below.

    https://blog.cloudflare.com/encrypted-sni/
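
    For what it's worth, the early drafts (and the Cloudflare post above) have the front-end publish its base64-encoded ESNIKeys in a TXT record at _esni.<hostname>, so one rough way to check whether a name advertises ESNI is to query for that record. A sketch using dnspython, with the hostname as a placeholder and the record layout subject to change as the draft evolves:

      import dns.resolver   # pip install dnspython

      hostname = "example.com"   # placeholder; substitute a Cloudflare-fronted site
      try:
          # Early ESNI drafts: the ESNIKeys structure is published, base64-encoded,
          # in a TXT record at _esni.<hostname>.
          for record in dns.resolver.resolve(f"_esni.{hostname}", "TXT"):
              print("ESNI keys published:", record.to_text())
      except dns.resolver.NXDOMAIN:
          print("No _esni record; ESNI not advertised for", hostname)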

nykolasz 6 years ago

I would love to understand why Firefox keeps adding support for Cloudflare-specific features.

  • dagenix 6 years ago

    That's not remotely true. ESNI is a draft IETF spec - https://datatracker.ietf.org/doc/draft-ietf-tls-esni/. It just so happens that, right now, Cloudflare and Firefox are the ones that implement it. But any particular feature, regardless of how great it is or how well specified by a standards body, must have a first implementation by someone, and it's really not that shocking that a group like Cloudflare wants to be at the forefront of new web technologies AND also has the resources to pay for it.

    What does boggle the mind is that everyone freaks out when a draft IETF standard is implemented. What do people want? For it to spring into existence fully formed, implemented by all browsers, operating systems, DNS software and providers, etc. all at once? That would be ridiculous - and worse, even if that could happen, without a few experimental implementations to work out the issues, what got implemented would probably have significant problems that we'd then be stuck with forever.

  • jgrahamc 6 years ago

    This isn't a Cloudflare-specific feature. It's an IETF draft standard (https://tools.ietf.org/html/draft-rescorla-tls-esni-00). Just as we've done with other standards in progress (QUIC, TLS 1.3), we've implemented it on our network. That helps get the standard tested and adopted quickly. Literally anyone can implement that standard; there's nothing "Cloudflare" about it.

ex3ndr 6 years ago

Does this mean that this change will add yet another roundtrip?

comesee 6 years ago

As a cancer survivor, using the example of someone spying on your cancer.org visit as a motivation for encrypted SNI seems a bit excessive and insensitive. There are definitely more neutral ways of motivating eSNI than invoking the fear of a stranger finding out you or a loved one has cancer. Shame on Mozilla.

  • rocqua 6 years ago

    Cancer is something you may want to keep secret without it being shameful. Such examples are rare in privacy discussions.

    Yet those examples are useful because they prevent people from dismissing the argument with `people shouldn't be doing that anyway`. That is, no one wants to keep people from going to cancer-related websites. This is very different from e.g. porn or STDs. I guess the exception might be the nut-cases who believe people who get cancer `deserve it`, because otherwise it wouldn't be part of god's plan. But luckily very few people consider those opinions relevant.

  • mort96 6 years ago

    I get you, but I suppose the most obvious alternative examples are things like porn or illegal sites, which they might not have wanted to use. What alternative examples would you have preferred?

    Apple's contrived example from when they introduced private mode in Safari was shopping for presents and not wanting the recipient to find out, but that would be even less convincing when the person you're hiding your traffic from is the person next to you in the coffee shop.

    • comesee 6 years ago

      The coffee shop being able to know all the domains you're visiting without your consent is already bad enough. I don't think they need to draw out a specific example. If the idea of the coffee shop knowing all the websites we visit is bad, we'll come up with our own examples, and that's something everyone can do. Almost no one researches cancer online.

  • dagenix 6 years ago

    That example does make a very strong case for the feature - which I guess was the point. But the audience for the post already knows what encrypted SNI is and why it's important. And if they don't, a much more light-hearted example would do. And it's not like there is much of an anti-encrypted-SNI movement that warrants a powerful response.

    So, yeah, I'd agree it seemed both jarring and unnecessary.

    • comesee 6 years ago

      Yeah, not sure why I'm being downvoted. Imagine if the example had been HIV.org or ebola.org; it totally doesn't seem necessary.

      • dang 6 years ago

        Probably because of the "Shame on Mozilla" bit, which is a trope of the online callout/shaming culture, which we're trying to avoid here. The rest of your comment looks fine to me, and you're obviously speaking from hard experience, which makes comments much more substantive.

      • Simon_says 6 years ago

        You're being downvoted because your own statement contradicts itself. You're so touchy about the cancer thing that you're complaining about it.

        • dagenix 6 years ago

          > touchy about the cancer thing

          Wow. That is technically a valid English phrase. What boggles the mind is that someone could be so out of touch with societal norms and basic human decency that they would actually use it.

        • comesee 6 years ago

          I'm touchy about the cancer thing because I had my leg amputated and went through chemo for a year.

          • Simon_says 5 years ago

            Getting agitated over a hypothetical example is a bit of an overreaction.

            • dagenix 5 years ago

              I don't know comesee - but they're saying it isn't at all hypothetical for them. It's called basic human decency - maybe give it a try?