Sephr 5 years ago

Using HTTP for apt may seem bad, but you should really pay attention to Ubuntu itself: https://github.com/canonical-websites/www.ubuntu.com/issues/...

Ubuntu ISOs aren't served securely and are trivially easy to MitM attack.

This vulnerability is still being exploited: https://www.bleepingcomputer.com/news/security/turkish-isp-s...

  • vvanders 5 years ago

    Wow, that's pretty stunning and I'm honestly quite shocked it's still the case in '19.

    Just downloaded Ubuntu server for some local instances here at home and realized that I hit this path without even knowing.

    [edit] Looks like ISO signatures are served over HTTP as well.

    • blub 5 years ago

      It's one of my pet peeves and I can say that a ton of organisations do this in 2019, including ones that should know better, like Ubuntu.

      The only secure way to install is to somehow get their key, already have gpg installed and then verify that way.

      • whydoyoucare 5 years ago

        As long as verification happens independently, and the keys are obtained from trusted sources, there is nothing wrong in downloading Ubuntu over http.

        This appears to be the case; see the "How to verify Ubuntu download" tutorial, which provides detailed steps: https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify...

        (If you plan to install an Operating System, then I believe some homework is in order -- you cannot expect the OS developers to spoon-feed you trivial security aspects that are expected to be a skill-set that you, the operating system installer, or System Administrator, do possess).
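
        (For concreteness, a rough sketch of the commands that tutorial boils down to; the key fingerprint is a placeholder, use the one Ubuntu publishes:)

          # fetch the signing key over hkps rather than plain hkp (fingerprint is a placeholder)
          gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys <UBUNTU-IMAGE-SIGNING-KEY-FINGERPRINT>
          # check the detached signature on the checksum file
          gpg --verify SHA256SUMS.gpg SHA256SUMS
          # then check the downloaded ISO against the signed checksums
          sha256sum -c SHA256SUMS 2>&1 | grep OK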

        • stouset 5 years ago

          > If you plan to install an Operating System, then I believe some homework is in order -- you cannot expect the OS developers to spoon-feed you trivial security aspects that are expected to be a skill-set that you, the operating system installer, or System Administrator, do possess

          If you plan to publish an operating system, you should make every effort to reduce this homework as much as is humanly possible. For every step where a user is expected to—but not explicitly required to—perform extra work to ensure their own security, the overwhelming majority of users will not take these extra steps.

          Arguing that they "should" gets us nowhere.

          • SmellyGeekBoy 5 years ago

            Indeed, and adding these extra verification steps just feeds the "Linux is too complicated for ordinary people" crowd.

        • vvanders 5 years ago

          > and the keys are obtained from trusted sources

          Isn't that a little hard if the verification keys (from that link) are served over HTTP [1][2]?

          [1] http://releases.ubuntu.com/18.04/SHA256SUMS.gpg

          [2] http://releases.ubuntu.com/?_ga=2.21478331.1690774166.154812...

          • bigiain 5 years ago

            Those are GPG-signed SHA hashes. Sure, anyone MITMing the http connection could change them, but for them to be valid signatures for the theoretically modified ISO you've ended up with, they either need the correct GPG private key to generate that new signature, or they need to convince you to accept a bogus public key for ubuntu-archive-keyring.gpg

            • vvanders 5 years ago

              Or they could serve all this over HTTPS where that process would happen automatically when you download.

              As another poster noted the keyserver in that link will work over http so you're still susceptible to a possible downgrade attack.

              Given how easy it is to get an LE cert these days, it makes me question what other security decisions Ubuntu is making that I'm not aware of or don't have the expertise to evaluate.

              • bigiain 5 years ago

                Heh. I think that "other poster" may also have been me...

                With regards to LE certs, I discussed elsewhere here the problems faced by the apt developers that make the obvious "just install a LE cert!" answer not actually workable for them - they rely heavily on volunteer-run mirrors and local proxies. I don't know what Ubuntu's ISO download "server" really is underneath; it's possible they've got similar problems, where either they pay for all the bandwidth themselves, or they let people assume that an SSL-secured connection to "ubuntu-mirror.usyd.edu.au" or "ubuntu-mirror.evilhacker.ru" is for some reason "safer" than an http connection...

          • whydoyoucare 5 years ago

            The Ubuntu verification tutorial (see my previous post) points to keyserver.ubuntu.com.

            • bigiain 5 years ago

              Hmmm, I just had a look there.

              keyserver.ubuntu.com is available over http as well as https, which means it's probably susceptible to a downgrade attack if you're mitming it - which would let you serve a bogus public key and therefore generate apparently valid signatures for the hash for a modified ISO.

              That's not good...

        • yjftsjthsd-h 5 years ago

          > As long as verification happens independently, and the keys are obtained from trusted sources, there is nothing wrong in downloading Ubuntu over http.

          In the abstract, this is true. In practice, however, the checksums are always downloaded from the same page as the OS and usually over the same (unencrypted) connection from the same servers.

          • bigiain 5 years ago

            Yeah, but the checksums are GPG signed, so you can't change the ISO and then create a valid signature for the new checksum without also having the GPG private key matching the public key you use to check the signature.

            Having said that, it seems keyserver.ubuntu.com is happy enough to allow connections via http instead of https, so there's a valid avenue to serve up a bogus public key...

            That's a much smaller window of opportunity, since if you're already an Ubuntu user you'll have a pre-existing copy of ubuntu-archive-keyring.gpg, rather than trying to download a possibly MITM-ed public key at the same time as you download the ISO. But I must admit I boggled a little bit when I saw that keyserver.ubuntu.com happily serves their public key over http instead of just https...
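
            (If you already have an Ubuntu box you can sidestep the keyserver entirely and check the signature against the keyring you already have; roughly, and assuming the image signing key is in that keyring:)

              gpg --no-default-keyring \
                  --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg \
                  --verify SHA256SUMS.gpg SHA256SUMS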

    • bigiain 5 years ago

      > Looks like ISO signatures are served over HTTP as well.

      That's less of a problem than it sounds.

      Sure, you can MITM and change the sig on the fly, but without the private key you cannot generate a valid sig for a modified ISO. (And if Ubuntu have had that private key stolen, there are much, much deeper problems than the ISOs being served over http...)

      On the other hand, I suspect only a single-digit percentage of the people who download the ISOs then jump through the GPG hoops to check that the signature is valid. And as with all PGP/GPG keys, you've got the bootstrapping problem of how you know that your copy of ubuntu-archive-keyring.gpg is real to start with... (I've been to key signing parties, but not in the last ~30 years...)

      • cryptonector 5 years ago

        But you need a trust path. If this is the first time you've heard of the Ubuntu signing keys and you have no trust path from them to a signer you trust, then you have a problem, so, yes, all of this should be served over HTTPS. The WebPKI is hardly great, but it's a lot better than nothing.

    • NicoJuicy 5 years ago

      It seems that over the last few days some people have been actively looking for http downloads.

      VideoLAN and apt-get now. Here is why VideoLAN doesn't do it: https://www.beauzee.fr/2017/07/04/videolan-and-https/

      TL;DR: They can't force HTTPS on 3rd parties, which is why they can't do it. It's not as simple as running LE.

      • jolmg 5 years ago

        I imagine that every distribution system that relies on mirrors has a similar setup. It makes a lot of sense.

  • dessant 5 years ago

    I usually download it from the kernel.org mirror for this reason. https://mirrors.kernel.org/ubuntu-releases/18.10/

    • whydoyoucare 5 years ago

      How do you know they obtain it from a trusted source? ;-)

      • dessant 5 years ago

        I assume that they check the signatures using keys that they've exchanged in person by booping noses.

        • bigiain 5 years ago

          You're happy to just assume someone else checks the .sig? Why don't you do it yourself?

          • dessant 5 years ago

            I also do it myself. Boop

  • nyolfen 5 years ago

    use torrents

    • Sephr 5 years ago

      The official torrent links are also distributed over insecure HTTP and are also trivially easy to MitM attack.

      • nneonneo 5 years ago

        Ugh couldn’t they just put magnet links on their nice HTTPS site? It sounds like they just really don’t want to have to update their site when new releases come out.

        • groestl 5 years ago

          Or... we just found the governmentally sanctioned attack vector?

        • web007 5 years ago

          Per the current image / torrent: magnet:?xt=urn:btih:a4104a9d2f5615601c429fe8bab8177c47c05c84&dn=ubuntu-18.04.1.0-live-server-amd64.iso

rabi_penguin 5 years ago

Hmm, it's almost as if the author of https://whydoesaptnotusehttps.com/ may have overlooked a few things.

  • all_blue_chucks 5 years ago

    That's because security requires defense in depth. If the failure of a single security control can invalidate your security model, your security model is inadequate.

    It should require multiple things to go wrong for catastrophic failure. This is a lesson from engineering that hasn't made its way to software development yet (outside of security engineering, anyway).

    • whydoyoucare 5 years ago

      I believe several terms here need context, without which they are meaningless. For example, what is an acceptable security model for downloading and installing an OS, how exactly do you define "defense in depth" for the act of downloading and installing an OS, etc. That will help us define what controls should be in place for the overall OS-download-and-install experience to be secure.

      • all_blue_chucks 5 years ago

        Defense in depth is just an industry term for redundant security. For example, you can mitigate tampering with data transfers by signing the data itself AND ALSO by signing the channel it's transferred over with TLS. If a flaw is found in one of those methods, the other will still protect you.

        The process of listing all the security failure points and documenting the redundant mechanisms to protect them is called threat modeling.

        For a system that installs OS-level binaries as root, it would absolutely be appropriate to threat model it and hold it to a defense-in-depth standard. In defense systems, they often require three levels of defense in depth, the last being an air-gapped network.

        • whydoyoucare 5 years ago

          Yes, this works. In theory. In practice, a simple model is needed that everyone can follow and implement consistently. That does not exist.

          Look up "threat modeling" and you will see how abstract a notion it is (even your comment calls for a "redundant mechanism" that may not be exactly what you are looking for), and how little information is available. End result? Most do it for the "checkbox effect". Don't get me wrong, I am not trying to dismiss what you said, just putting some factual data around it.

          • all_blue_chucks 5 years ago

            You're right that it's not simple. In fact, studying security-sensitive ways in which software tends to fail and how those failures can be mitigated is an entire field unto itself. Software developers can't be expected to get it right on their own. That's why all major software companies have security engineers on staff.

            Open source projects, unfortunately, rarely have such contributors. Probably because building stuff is more fun than threat modeling (which can be quite tedious to do properly).

    • blattimwind 5 years ago

      One really obvious thing that occurs to me is that apt uses separate processes for network I/O but fails to reduce their privileges.

    • justinclift 5 years ago

      > If the failure of a single security control can invalidate your security model, your security model is inadequate.

      As an example: an admin gets an AWS Security Group wrong, thereby exposing database servers / Redis / customer data. Consequence... multimillion-dollar fines and brand/reputation damage.

      It's kind of sad how badly things are set up to fail sometimes. :(

  • moviuro 5 years ago

    OTOH, they would have been right if there had been (yet another) bug in openssl or whatever lib would be used for https.

    FWIW: 16 vulns in apt in NVD [0]; but 202 for openssl [1]

    [0] https://nvd.nist.gov/vuln/search/results?form_type=Advanced&...

    [1] https://nvd.nist.gov/vuln/search/results?form_type=Advanced&...

    • Ajedi32 5 years ago

      APT already supports HTTPS. Enforcing it by default wouldn't increase APT's attack surface significantly.

      • loeg 5 years ago

        It would decrease the number/capacity of available mirrors. I don't know if that decrease would be significant.

        • Ajedi32 5 years ago

          You would lose the ability to do transparent caching, which I agree is rather annoying, but I think most environments where that sort of caching occurs (mostly corporate and school networks) also have the means to explicitly configure client machines to use an internal caching mirror.
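
          (Pointing clients at an internal cache is usually just one apt.conf.d snippet; the hostname here is made up, and 3142 is apt-cacher-ng's default port:)

            // /etc/apt/apt.conf.d/01proxy (hypothetical internal cache)
            Acquire::http::Proxy "http://apt-cache.internal.example:3142";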

          • joombaga 5 years ago

            Those environments tend to MITM https traffic as well. At least the companies I've worked for can.

            • opless 5 years ago

              Those companies really need to learn what privacy means.

          • opless 5 years ago

            You could run a caching mirror for such things, e.g. Artifactory.

    • raesene9 5 years ago

      OpenSSL is not the only available implementation of SSL.

      • detaro 5 years ago

        Realistically, it's the one that's gonna be used in apt though. (E.g. the existing HTTPS transport uses curl, which uses OpenSSL)

        • jwilk 5 years ago

          libcurl has many TLS backends. APT before 1.6 actually uses the GnuTLS one.

          APT ≥ 1.6 doesn't use libcurl; it uses GnuTLS directly.

          • detaro 5 years ago

            I stand corrected, thx.

        • raesene9 5 years ago

          Sure, my point was more that if it's a serious concern to use OpenSSL (personally I don't think it is, as a lot of work has been done in recent years to improve that codebase), other options are available.

    • skywhopper 5 years ago

      So your argument is that bugs in OpenSSL (a necessarily very complex piece of software) mean that using SSL to increase network security is a bad thing?

      • geofft 5 years ago

        No, their argument is that bugs in OpenSSL mean that using SSL when it does not actually increase network security is a bad thing. Reduce your attack surface. Use the things you need to use; don't use things you don't need to use.

        Whether apt using OpenSSL would in fact increase network security is a separate and debatable question, but the argument as stated assumes it would not, and is sound.

        • zaarn 5 years ago

          Defense in depth is a thing.

          SSL provides some security guarantees.

          Using signed package databases also provide some security guarantees.

          Both may overlap in what security they provide.

          BUT!

          If one fails, the other can continue to provide a subset of the previously available guarantees.

          • moviuro 5 years ago

            Not in the case of OpenSSL, no. (Some) OpenSSL issues, just as (some) apt issues, end with RCE. Game over.

            Priv-sep, correctly handling untrusted files (e.g. 1. check signature, then 2. execute whatever; not the other way round), memory-safe languages, etc. would be more welcome additions.

            • blattimwind 5 years ago

              > Priv-sep

              Apt even has the hard part implemented already, by separating the network I/O into other processes. The only problem is that those currently write directly to system directories, but that can be fixed.

            • tracker1 5 years ago

              And overcoming both is much harder than overcoming either on their own.

              • moviuro 5 years ago

                In the worst case, you only need to overcome one. And you ~double your attack surface.

                • tracker1 5 years ago

                  Which one? If you gain control of a server, you'd still have to overcome signing... and you only control that single server, not all replicas (for a bug like in TFA).

                  Could you describe a way to have double the attack surface that would affect the majority of peer servers?

      • moviuro 5 years ago

        No. My argument is that both arguments do exist and security is all about middle-ground. Do we add (yet another) layer of (bug-riddled) software to defeat one possible sort of exploit, or not? How much does it cost? (Time, money, etc.)

        Other options include: not handling http directly in the package manager but using a known-good library (curl?); doing priv-sep; not having the package manager execute code from a file before it has checked its authenticity...

        My personal opinion on the topic is that authenticating servers is a good thing to do (https does this; encryption is an added benefit), but time has shown that https is broken: libs are full of holes; the CA model is broken by design. Maybe share updates using ssh?

        • nneonneo 5 years ago

          Oh for god’s sake, will you stop spreading nonsense FUD about https? https is not “broken”. The majority of websites run on https now; most system update mechanisms, with the VERY notable exception of apt, use https to serve updates.

          Sure, there are bugs in libraries, but seeing as https is already widespread you’re not exposing yourself to MORE risk by using https over plain http, and you mitigate attacks like the one in this post. Any random coffee shop, untrusted public WiFi, or attacker with a Pineapple could have used this attack to MITM HTTP apt, whereas the attacker would have to compromise an upstream mirror to pull off the same attack over HTTPS.

          And re: the CA model, if you’re THAT worried about compromised or fake certs, then pin the cert for a root server like debian.org, then download PGP-signed cert bundles for mirrors and enforce certificate pinning using those bundles only. Done. Apple and Microsoft use cert pinning for their update systems (IIRC).
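
          (If I’m remembering the apt-transport-https options right, you can already pin a CA bundle per host, roughly like this; the path is made up:)

            // /etc/apt/apt.conf.d/99pin-mirror (hypothetical)
            Acquire::https::deb.debian.org::CaInfo "/etc/apt/tls/mirror-ca.pem";
            Acquire::https::deb.debian.org::Verify-Peer "true";
            Acquire::https::deb.debian.org::Verify-Host "true";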

          • whydoyoucare 5 years ago

            The CA model requires "extra" investment that not many are willing to make. Plus there is a secure way to distribute an OS that does not require https [download over http, obtain signed checksums from a trusted source, verify, and proceed with installation].

            • cryptonector 5 years ago

              Letsencrypt.org.

              Stop it, just stop it.

          • moviuro 5 years ago

            > The majority of websites run on https now

            We are talking about a specific use case of https: software repositories, which are far higher-value targets than your random website, with another set of challenges. Your package manager can actually do some things as root; once it's owned, your system is Game Over.

            Adding yet another lib on top of the (?) most important piece of software on your computer is not a risk to take lightly. There are more elegant solutions (signatures, priv-sep, not trusting anything until authenticated, etc.) that require less risky code to run, and fewer people to come into play.

            > Sure, there are bugs in libraries, but seeing as https is already widespread you’re not exposing yourself to MORE risk by using https over plain http

            Irrelevant. We're talking about instant game over if it goes to sh.. even if just once. More attack surface = more vulnerabilities.

            • nneonneo 5 years ago

              If you're that paranoid about OpenSSL, then just sandbox it. Throw the entire `apt-transport-https` subprocess in an unprivileged context. Done.
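
              (Recent apt versions already go part of the way here, if I’m reading the docs right: the acquire methods run as an unprivileged user, and newer releases have an off-by-default seccomp filter you can switch on, something like:)

                // apt.conf snippet; Seccomp is experimental in newer apt versions
                APT::Sandbox::User "_apt";
                APT::Sandbox::Seccomp "true";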

            • viraptor 5 years ago

              With https libs you're trading one potential issue for another class of issues. An https implementation may have an RCE, but that happens extremely infrequently and can be patched quickly. At the same time, it prevents the whole class of MitM issues, whatever element of the underlying system they would target (which would potentially need client changes to multiple elements to get fixed). This is a pretty easy decision to make.

        • jmvoodoo 5 years ago

          I think the point is that adding https over http for the current system would always improve security. At its most broken, https is at least as secure as http and therefore wouldn't reduce the security of the overall system. It adds one more hurdle for an attacker to clear.

          Similarly, the apt team ignoring a bug like this "because it's protected by https anyway" would be an invalid argument.

          • moviuro 5 years ago

            > adding https over http for the current system would always improve security

            No.

            If an attacker can inject packets that break your SSL lib, but wouldn't have broken your package manager, you added a vuln.

            Example: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2003-0545

            • jmvoodoo 5 years ago

              Fair. There are specific library attacks that could result in RCE. However, that is also true of curl and ssh, and for that matter could be introduced into your http library at some point. The question then becomes: what library do you trust most? OpenSSL is attacked and tested constantly. Things have been found (in your example, in 2003!). They have been fixed. Apt can choose to stand on its shoulders, or go through the entire process themselves by putting together a patchwork of their own solutions that will no doubt get less testing by whitehats and be a juicy target for blackhats.

            • giovannibajo1 5 years ago

              Even ignoring the fact that there are far better libraries than OpenSSL today (e.g. BoringSSL), apt already implements a sandbox-like approach (as the article explains); I'm not sure if the subprocesses are actually sandboxed, but obviously they should be, and at that point a vulnerability like the one you cited shouldn't let the attacker escape the very narrow sandbox.

    • jwilk 5 years ago

      How many of these would result in RCE?

      • ryanlol 5 years ago

        Hard to give an exact number, but there have been loads of potentially exploitable memory corruption issues.

      • dijit 5 years ago
        • detaro 5 years ago

          Fulltext search for "rce", which finds "resou_rce_", "sou_rce_", does not give a number of RCE vulnerabilities.

          • dijit 5 years ago

            Except that "EXACT MATCH" is enabled, try yourself.

            (It should be noted that it /does/ match on "possible RCE", which buffer overflows are often tagged with.)

            • detaro 5 years ago

              I have. Search for "ontgome" and it finds the bugs containing "Montgomery" (I have taken your url and just replaced the search word):

              https://nvd.nist.gov/vuln/search/results?form_type=Advanced&...

              I'm not saying none of the results from your search are RCEs, but not all are, and many are fairly speculative.

              • dijit 5 years ago

                Argh, that's frustrating, I checked 4 of them and thought it was fine.

                The problem is that there seem to be many classifications of remote code execution including buffer overflow and "code injection" and you can't choose multiple. :(

                • detaro 5 years ago

                  Yes, I also was surprised that the search didn't have more useful tools (e.g. search by high ranking in individual factors, or even just sort by severity: confirmed RCEs should all be very high)

          • da_chicken 5 years ago

            As far as I can tell, NIST doesn't directly use the term RCE. CVE-2010-5298 is an OpenSSL vulnerability that allows data injection that could potentially result in code execution, but there's no easy way to see that from NIST's categorization:

            https://nvd.nist.gov/vuln/detail/CVE-2010-5298

            • detaro 5 years ago

              that's an additional problem with relying on text search.

  • kerng 5 years ago

    Makes me chuckle, since my comment on yesterday's HN post, which highlighted that someone will get bitten if security principles are ignored, got downvoted.

    • ru999gol 5 years ago

      I was laughing as well; software people (the HN and open source communities in particular) are so stupidly opinionated sometimes that it makes me cringe whenever it's impossible to ignore their drivel.

      There really is no excuse not to use HTTPS in 2019, period.

    • black_holed 5 years ago

      Well, they didn’t fall into the black hole of “moderation” at least...

  • snek 5 years ago

    I will never understand why people are so proud to have gotten rid of "needless" security. It always ends the same way...

  • spuz 5 years ago

    I think I understand the exploit but I don't understand whether apt using https would prevent it or not. The author says:

    > Yes, a malicious mirror could still exploit a bug like this, even with https.

    and:

    > I wouldn’t have been able to exploit the Dockerfile at the top of this post if the default package servers had been using https.

    So which is it?

    • raesene9 5 years ago

      Essentially, adding HTTPS would make the attack harder to exploit. It's not that HTTPS is a panacea (it's not), but that it raises the bar for a successful attack.

      With HTTP, this can be exploited by anyone who can MITM a connection between you and the APT server or has control of your DNS.

      If you consider all the cases like wi-fi hotspots, that's (potentially) quite a large set of attackers, and a relatively easy attack to pull off in a lot of cases.

      With HTTPS, the attacker has either to compromise the whole APT mirror or has to get a valid HTTPS certificate for an APT mirror. This is likely harder to pull off, especially when you look at the work on improving CA security that the browser vendors have been doing over the last couple of years.

      • gnulinux 5 years ago

        We're talking about million-dollar software designed for governments and sold only to the highest bidders. I refuse to believe using HTTPS would be helpful here. This attack uses the state of the art to exploit HTTP, and there is no reason to assume it wouldn't use the state of the art if it were HTTPS.

    • hannob 5 years ago

      HTTPS: A malicious mirror operator can pwn you.

      HTTP: Everyone can pwn you.

      Not saying the first one is ideal, but the second one is definitely worse.

      • trulyrandom 5 years ago

        With HTTP an attacker still has to MITM the connection between you and the mirror operator. So, definitely not "everyone".

        • cryptonector 5 years ago

          That includes: coffee shops, ISPs, employers, everyone who can hack their routers, anyone who can spoof DNS, etc. That might as well be "everyone".

          STOP IT. Thou shalt use HTTPS.

          • trulyrandom 5 years ago

            Fair enough. I agree that HTTPS is valuable here. I was just being overly pedantic, my bad.

        • DoctorOetker 5 years ago

          The moment we are talking about monitoring users' software base, we are practically already talking about attackers at the skill level of nation states, so yeah, "everyone" in the subset of plausible attackers.

    • detaro 5 years ago

      Both. Without HTTPS, you can execute the attack if you can MITM the connection to the package repository. If HTTPS is used, you need to be the package repository to do the attack, or need a certificate to MITM the connection so you can pretend to be it.

      • gnulinux 5 years ago

        With HTTPS a MitM attacker can still refuse to serve a specific package.

    • shinigami 5 years ago

      https protects against an attacker who can compromise the network. It does not protect against an attacker who can compromise the mirror. The author can't compromise an existing mirror, so they wouldn't be able to exploit it (through the network) if the servers were using https.

  • _wmd 5 years ago

    Every time this site comes up, people entirely miss the point in this regard -- Debian operates a large network of volunteer-run mirrors. You are not trusting content coming from Debian, you're trusting it coming from the mirror. SSL only secures the link between the client and the potentially compromised mirror; it does not solve problems like the one from the article.

    Meanwhile it's worth pointing out that OpenSSL has historically been one of the buggiest pieces of code in existence. Despite this being a game-over RCE, it's the first of its kind in many years. If OpenSSL had been in the mix, Apt would have required forced upgrades /far/ more often. https://www.openssl.org/news/vulnerabilities.html

    • raesene9 5 years ago

      I don't think the argument that using HTTPS actively decreases the security of a connection really holds up all that well.

      If you don't think OpenSSL is a high enough quality implementation, there are many others to choose from.

      Even with a range of mirrors, requiring HTTPS would still raise the bar for attackers.

  • jamp897 5 years ago

    He never explains why he wants to use HTTP; it’s only about why he thinks HTTPS isn’t necessary.

    • AndyKelley 5 years ago

      HTTP is the null hypothesis, since it's simpler. Usually there is a great reason to reject this null hypothesis - it prevents security vulnerabilities. But if there is no added value, then there is no reason to do it.

      Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?

      • greglindahl 5 years ago

        It's worth noting that a large number of people don't agree with you that HTTP is the null hypothesis. Instead, they think that HTTPS is a security/privacy best practice and a great part of defense in depth.

        You can see this pro-HTTPS opinion all over this discussion.

        As for your "consider", I personally do double-wrap many streams: I have a VPN for my browser. The VPN is great for hiding my home traffic from being spied on by my ISP. Without the VPN, HTTPS streams would reveal hostnames (SNI) and IP addresses to my ISP.

      • yjftsjthsd-h 5 years ago

        > Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?

        If it's the exact same implementation then that doesn't really add a second layer. If, however, I am provided the option to run HTTPS over a VPN tunnel, then I would happily do that in a heartbeat. In fact, I frequently do run my web traffic over a proxy, thereby giving it at least two layers of encryption.

      • jamp897 5 years ago

        Yet it’s actually not simpler for the user, since their transfer can then be tampered with, either by accident or intentionally, leaving the user with a broken download. And then what do they do? A redownload from a different mirror makes no difference.

        • AndyKelley 5 years ago

          The situation you described is the same thing that happens with a MITM attack with HTTPS. You would get a failed download from any mirror.

          Do you have a response to my question? "Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?"

          • jamp897 5 years ago

            It’s not the same: Comcast and other ISPs don’t tamper with HTTPS, and if they break the HTTPS connection then it’s a clearer problem for the ISP to troubleshoot than corruption.

            Sorry I don’t understand what double wrapping has to do with it, or why you’d ever do that.

          • nneonneo 5 years ago

            Because that just makes things slower for no good reason?

            • loeg 5 years ago

              Sounds like an argument for rejecting HTTP+TLS single-wrap too. (For apt — not in general.)

              • nneonneo 5 years ago

                I was being glib because I didn’t think I needed to explain fully, but here we go.

                Double-encrypting something with the same technique is pretty much always a sign of cargo cult crypto. Modern ciphers, like those used by TLS, are strong enough that there’s no reasonable way to break them applied once, and the downside is that applying them twice makes things slower than they need to be for zero added benefit.

                On the other hand, TLS and PGP are very different things serving very different purposes, so nesting those makes sense. There is an added benefit from TLS, namely that you ensure that everything is protected in transit - including the HTTP protocol itself (which is currently not protected and which might be subject to manipulation as shown in this post). Plus, it provides some resistance to eavesdropping (and with eSNI + mirrors hosted on shared hosts, that resistance should improve further).

    • LoSboccacc 5 years ago

      also, the apt way to fix this would be to a) move release.gpg out of the package path and b) require the release.gpg to be wrapped and signed with the previous valid key instead of being accepted blindly

    • guest2143 5 years ago

      Some countries' firewalls disrupt https, which makes downloading things via https difficult.

      • tinus_hn 5 years ago

        So you create a non-default http mirror for that minority, instead of making the majority insecure.

      • DoctorOetker 5 years ago

        So if North Korea is subjugating some poor souls over there, the whole world must suffer along? There could be a setting with a big warning to disable the default HTTPS behaviour...

      • jamp897 5 years ago

        Which countries? I’ve only seen HTTP connections tampered with in practice, and China’s GFW blocks HTTP no differently than HTTPS from what I’ve seen.

      • eeZah7Ux 5 years ago

        Also some companies, to allow IDS to inspect traffic without having to extract keys from clients.

  • lvs 5 years ago

    They (ubuntu, at least) have always been overtly hostile about this issue, which seems strange. I recall seeing open issues that just devolved into argument for many years on this. I don't get it.

  • squirrelicus 5 years ago

    I was just coming here to post that

  • loeg 5 years ago

    You know that HTTPS has the Location header too, right? This attack works just as well against HTTPS clients.

peterwwillis 5 years ago

Am I crazy, or is the bigger problem here not the fact that Apt will just install whatever random package the server provides, whether your system trusts its GPG key or not? What the hell is the point of the keys if the packages are installed anyway??

  • thriqon 5 years ago

    Usually, the packages themselves are not signed with GPG, only the Release file is (containing the hashes of all .deb files). This is actually the default of both Debian and Ubuntu. I never quite understood the reasons behind it... I'd not expect this vuln to happen, though.

    More info: https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-sign-v...

    • peterwwillis 5 years ago

      I kind of get why they did it that way (my guess: managing lots of dev keys was problematic, so they used one key to sign a list of "official-seeming" files). But then why weren't the Release file's contents verified (assuming this doesn't involve generating collisions for those packages)?

      "The parent process will trust the hashes returned in the injected 201 URI Done response, and compare them with the values from the signed package manifest. Since the attacker controls the reported hashes, they can use this vulnerability to convincingly forge any package."

      Wtf? This sounds like Apt is just downloading a gpg file and checking if it matches a hash in an HTTP header, and if it does, it just uses whatever is specified, regardless of whether your system already had the right key imported? This makes no sense. Any mirror could return whatever headers it wanted.

      This is the real vuln, not header injection. If Apt isn't verifying packages against the keys I had before I started running Apt, there was never any security to begin with. An attacker on a mirror could just provide their own gpg key and Releases file and install arbitrary evil packages.

      Can somebody who knows C++ please verify that their fixes actually stop installing packages if the GPG key wasn't already imported into the system? https://github.com/Debian/apt/commit/690bc2923814b3620ace1ff...

      • justicz 5 years ago

        The hash is computed locally in the http worker process. I think you may be confusing headers in the HTTP response with headers in the internal protocol used to communicate with the worker process. The 201 response is not an HTTP response.
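
        For anyone else thrown by the numbers: the workers talk to the parent over a line-based text protocol that only superficially resembles HTTP status lines. A successful fetch ends with something roughly like this (values made up for illustration):

          201 URI Done
          URI: http://deb.debian.org/debian/pool/main/.../something.deb
          Filename: /var/cache/apt/archives/partial/something.deb
          Size: 12345
          SHA256-Hash: <hash the worker claims to have computed; this is the field the injection forges>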

kuhhk 5 years ago

> Thank you to the apt maintainers for patching this vulnerability quickly, and to the Debian security team for coordinating the disclosure. This bug has been assigned CVE-2019-3462.

From the article. Good stuff

kanox 5 years ago

I was expecting a buffer overflow, but this is a quoting issue, which is equally applicable to all languages. Are there any language-level methods which make such bugs impossible (or much harder)?

Heavy-weight, strongly-typed HTTP libraries can force you to always construct headers in a way that handles quoting for you, but people seem to love "light" solutions.

  • aasasd 5 years ago

    The method is to treat the protocol as structured data instead of a bunch of concatenated text. I.e. use a library that processes each piece of data, escaping any invalid characters, before putting the pieces together.

    Strong typing isn't relevant here; this is applicable in any language. But the lib needs to know when you're putting text in a header name and when in the value.

  • devit 5 years ago

    Using a binary protocol library for RPC (e.g. gRPC, raw protobufs, etc.) rather than writing your own text-based protocol incorrectly.

kpcyrd 5 years ago

It seems debian testing and unstable are still vulnerable:

https://security-tracker.debian.org/tracker/CVE-2019-3462

  • ChrisSD 5 years ago

    Yep, this is our daily reminder that they're called "testing" and "unstable" for a reason. They're not meant for production.

    • gtirloni 5 years ago

      New fixes often land in unstable/testing before going to stable branches.

      • cwyers 5 years ago

        I would hope so! Why have a testing branch if you're not going to test things in it?

      • debiandev 5 years ago

        True. But also new vulnerabilities land into unstable when upstream software releases are made. :)

        That's why the distribution is baked for months before being called stable.

explainplease 5 years ago

    $ sudo sed -i 's/http:/https:/g' /etc/apt/sources.list
    $ sudo apt-get update
    ...
    Err https://us.archive.ubuntu.com trusty/main Sources                          
    Failed to connect to us.archive.ubuntu.com port 443: Connection refused
    ...
    Err https://security.ubuntu.com trusty-security/main amd64 Packages
    Failed to connect to security.ubuntu.com port 443: Connection refused
  
???

    $ curl -v https://security.ubuntu.com/ubuntu/
    * Hostname was NOT found in DNS cache
    *   Trying 91.189.88.149...
    * connect to 91.189.88.149 port 443 failed: Connection refused
    *   Trying 91.189.88.152...
    * connect to 91.189.88.152 port 443 failed: Connection refused
    *   Trying 91.189.88.161...
    * connect to 91.189.88.161 port 443 failed: Connection refused
    *   Trying 91.189.88.162...
    * connect to 91.189.88.162 port 443 failed: Connection refused
    *   Trying 91.189.91.23...
    * connect to 91.189.91.23 port 443 failed: Connection refused
    *   Trying 91.189.91.26...
    * connect to 91.189.91.26 port 443 failed: Connection refused
    * Failed to connect to security.ubuntu.com port 443: Connection refused
    * Closing connection 0
    curl: (7) Failed to connect to security.ubuntu.com port 443: Connection refused

So even security.ubuntu.com is unavailable over HTTPS? Am I missing something?

  • deathanatos 5 years ago

    As was discussed recently on HN (and linked to elsewhere in the comments for this article), packages are signed, and APT checks those signatures; however, APT does download both the packages and the signatures in the clear. So, normally, the signatures get checked, which ensures that you get the package you intended. This is fine, mostly. (Fine if you don't care about privacy, that is; it does prevent tampering, normally.)

    APT does not, however, give privacy, which HTTPS/TLS would. (Those in favor argue that TLS doesn't help here, as you can still see that you're connecting to Ubuntu, so it's still obvious that you're downloading updates. I personally disagree w/ this stance: I think there is value in protecting which packages you're pulling updates for, as what packages you have installed can inform someone about what you're doing. I think there's further argument that the sizes of the responses give away which updates you're pulling, but IDK, that seems harder to piece together, and TLS at least raises the bar for that sort of thing.)

    The bug discussed in the article circumvents the signature checking by lying to the parent process about the validity of the signature, essentially by executing a sort of XSS/injection attack.

    • explainplease 5 years ago

      I think you misunderstood my comment. I'm aware of the apt security model and the nature of this bug.

      My point is that these Ubuntu repo servers are not available over HTTPS, which seems like a problem. In the context of this bug, a serious one--who's to say that there aren't more bugs like this lurking? There's no reason that these servers shouldn't be available over HTTPS.

ptx 5 years ago

So how do I safely update apt to the patched version on Debian Jessie?

It's apparently fixed in version 1.0.9.8.5: https://security-tracker.debian.org/tracker/CVE-2019-3462

...but the suggested apt -o Acquire::http::AllowRedirect=false update fails because security.debian.org wants to do a redirect.

Manually downloading the packages listed in the announcement doesn't work either, since that's the Stretch version.

I can get the source package here: https://packages.debian.org/jessie/apt

...but the key is not in /usr/share/keyrings/debian-archive-keyring.gpg. (And I'm not entirely sure how to build a source package.)

I tried adding a different source, as suggested in the announcement:

  deb http://cdn-fastly.deb.debian.org/debian-security stable/updates main
...but it seems it can't find the right versions of all the dependencies:

  # apt -o Acquire::http::AllowRedirect=false install apt apt-utils libapt-pkg5.0 libapt-inst2.0 liblz4-1
  ...
  The following packages have unmet dependencies:
   libapt-pkg5.0 : Depends: liblz4-1 (>= 0.0~r127) but 0.0~r122-2 is to be installed
  E: Unable to correct problems, you have held broken packages.

  • jwilk 5 years ago

    No idea why security.debian.org would make a redirect. Can you paste the whole error message?

    Regarding the other error, jessie ≠ stable. You want "jessie/updates", not "stable/updates".
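
    i.e. the line from the announcement becomes:

      deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main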

    • ptx 5 years ago

      Ah, silly blind copy-pasting on my part. With "jessie/updates" it works. Thanks!

      The message I was referring to, which looks like it's indicating that security.debian.org is trying to redirect:

        # apt -o Acquire::http::AllowRedirect=false update
        ...
        Err http://security.debian.org jessie/updates/main Sources
          302  Found [IP: 217.196.149.233 80]
        Err http://security.debian.org jessie/updates/main amd64 Packages
          302  Found [IP: 217.196.149.233 80]
        Err http://security.debian.org jessie/updates/non-free amd64 Packages
          302  Found [IP: 217.196.149.233 80]
        Err http://security.debian.org jessie/updates/main i386 Packages
          302  Found [IP: 217.196.149.233 80]
        Err http://security.debian.org jessie/updates/non-free i386 Packages
          302  Found [IP: 217.196.149.233 80]
        Fetched 422 kB in 2s (169 kB/s)
        W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/source/Sources  302  Found [IP: 217.196.149.233 80]
      
        W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-amd64/Packages  302  Found [IP: 217.196.149.233 80]
      
        W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-amd64/Packages  302  Found [IP: 217.196.149.233 80]
      
        W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-i386/Packages  302  Found [IP: 217.196.149.233 80]
      
        W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-i386/Packages  302  Found [IP: 217.196.149.233 80]
      
        E: Some index files failed to download. They have been ignored, or old ones used instead.

nisa 5 years ago

FWIW you can have https today: just install apt-transport-https and use this sources.list:

    deb https://deb.debian.org/debian stable main contrib non-free
    deb https://deb.debian.org/debian-security stable/updates main contrib non-free
    deb https://deb.debian.org/debian stretch-backports main contrib non-free

jamieweb 5 years ago

All my servers do an update and dist-upgrade every 24 hours, and it emails me the log. I saw this post just a few minutes after checking the log for today.

I imagine that this is a higher risk for virtualized servers in a public cloud. I use Linode, so somebody else could have set up a Linode to MITM everybody and serve the exploit. If it were a private home or corporate network, somebody would either have to be on your network, or on a piece of major infrastructure between you and the mirrors.

Is there a way to tell from the apt log whether I am affected? It looks like you can see it trying to install an extra dependency package. Anyway, the logs are not immutable or verifiable, so if somebody got root they could theoretically kill apt, write a fake log in its place and then email that to me...

I took full images of all my servers a few days ago, so at least I have those should I need them.

  • perennate 5 years ago

    > I imagine that this is a higher risk for virtualized servers in a public cloud.

    I think it might be the other way around (at least in terms of virtualized servers versus physical servers, both on public cloud) -- it is easier to implement IP address and other filtering measures with virtualized servers than inside physical network switches. Linode and other virtual machine providers almost universally implement this filtering, but many dedicated server providers are not as robust.

    • jamieweb 5 years ago

      That's a good point actually - although when using dedicated hardware I usually have in my mind that everything is raw rather than abstracted by a hypervisor, so this sort of thing should be more expected.

      With a public cloud you don't really know how it's set up on their end, as there are countless different ways to do it.

aboutruby 5 years ago

> a malicious mirror could still exploit a bug like this, even with https. But I suspect that a network adversary serving an exploit is far more likely than deb.debian.org serving one or their TLS certificate getting compromised

Exactly, this is far more easily exploitable because apt is using HTTP by default instead of HTTPS

Kliment 5 years ago

If this were one of those bugs that do the full PR bullshit run and use a catchy name and a landing page, I'd propose it be called "Inapt"

aasasd 5 years ago

I'm just wondering if the author should've given people more time to pull the updated apt, before publishing the issue. This is only a few days old, right?

I'm not familiar with established procedures in such cases, and am curious about the rationale for omitting a time window for the update.

  • pfg 5 years ago

    Public disclosure once patches are available is a fairly common policy. Google's Project Zero operates like that as well.

systematical 5 years ago

So will someone more in the know than myself tell me if using apt-transport-https is a reasonable solution to this problem, or at least mitigates it?

As someone who uses Linux as their personal OS and administers some at work, but doesn't think in bash, I'd really like an answer :-)

  • moosingin3space 5 years ago

    It makes it so that your mirror would have to be the one exploiting apt, instead of effectively anyone on the network path. As a result, using TLS for downloads would mitigate this (but not fix it).

    I've read the arguments against HTTPS for apt many times. They're wrong.

  • kelnos 5 years ago

    It will mitigate the possibility of someone MitM'ing your connection to the package mirror. It won't protect you against a malicious package mirror.

ChrisCinelli 5 years ago

Incidentally, binaries from support.apple.com are also served over http.

  • aboutruby 5 years ago

    The downloads are on an HTTPS page leading to HTTPS download links, and HTTP redirects to HTTPS: https://support.apple.com/downloads/quicktime for instance.

    • ptx 5 years ago

      > and HTTP redirects to HTTPS

      ...or to anywhere a MITM attacker wants to redirect you.

      • ptx 5 years ago

        I guess some people disagree?

        But if any response in the redirect chain is served over HTTP, it can be replaced with a different response containing any "Location" header of the attacker's choosing instead of the original one. So it doesn't matter if the eventual intended URL is an HTTPS URL, because it will never be reached. The redirect will go to the attacker's site instead. (And in the case of a download, the user will never notice because the file URL is usually not prominently displayed.)

        So a HTTP response anywhere in a redirect chain is equivalent to serving it over HTTP. Perhaps this was exactly the point of the parent post, but I thought it would be useful to make it explicit.
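
        As a (made-up) illustration, the one plaintext hop is all an on-path attacker needs, because they can simply answer it themselves:

          $ curl -sI http://support.apple.com/downloads/quicktime
          HTTP/1.1 302 Found
          Location: https://attacker.example/definitely-quicktime.dmg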

feikname 5 years ago

Seems like the discovery of this vuln was a direct result of yesterday's discussion about HTTPS on apt here on HN (https://news.ycombinator.com/item?id=18958679).

  • justicz 5 years ago

    Nope, I reported this bug several days ago. Funny coincidence though.

  • dreta 5 years ago

    No. That would mean less than a day's heads-up from the researcher.

    • loeg 5 years ago

      It would be a really impressive turnaround on the fix from Debian, though ;-).