peterwwillis 5 years ago

I would argue we do need a real DNS replacement. The stupid 512 byte limit has been used forever as an excuse to keep important new features from being implemented, and the stateless nature has been abused by bad actors. We want and need something like QUIC, and a saner way to adopt new features.

The obsession with backwards compatibility is crazy. Imagine if we took real physical infrastructure in the world and insisted we continue to build it only in a way compatible with technology from the 1800s. We live in a modern world, where a firmware upgrade doesn't require a UV light source, and where we can probably get two or three companies to push for the rapid adoption of new industry standard formats - just look at what happened to enable tech like Ethernet to become a de facto standard.

  • bluejekyll 5 years ago

    You might be interested in DNS-over-QUIC: https://datatracker.ietf.org/doc/draft-huitema-quic-dnsoquic...

    Also, the 512 byte limit hasn’t been an issue for many years, as EDNS allows for much larger packet sizes, generally up to 4K. (Edit: although, some DNS recursive resolvers have started limiting UDP responses to 512 bytes, and only allowing larger packets over TCP, to reduce the effect of reflection attacks)
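    To make the EDNS mechanics concrete, here is a sketch of how a query advertises a larger UDP payload size via an EDNS0 OPT pseudo-record (RFC 6891). Pure-Python stdlib only; the helper name and the 4096-byte default are my own choices for illustration, not from any particular resolver:

```python
import struct
import secrets

def build_query(name, qtype=1, edns_payload=4096):
    """Build a raw DNS query packet; optionally append an EDNS0 OPT record.

    Passing edns_payload=None produces a pre-EDNS query, which caps
    responses at the classic 512-byte UDP limit.
    """
    txid = secrets.token_bytes(2)
    arcount = 1 if edns_payload else 0
    # Header: flags (RD=1), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT
    header = txid + struct.pack(">HHHHH", 0x0100, 1, 0, 0, arcount)
    # QNAME as length-prefixed labels, terminated by the root label
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    packet = header + question
    if edns_payload:
        # OPT pseudo-RR: root name, TYPE=41; the CLASS field is reused to
        # carry the advertised UDP payload size; TTL=0 (ext-rcode/flags),
        # RDLENGTH=0 (no options)
        packet += b"\x00" + struct.pack(">HHIH", 41, edns_payload, 0, 0)
    return packet
```

    The whole mechanism is that one trailing pseudo-record: a resolver that understands it may send responses up to the advertised size, and one that doesn't just ignores the unknown record, which is exactly why EDNS stays optional in practice.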

    As to backward compatibility, don’t underestimate the number of old devices on the network that need DNS to operate and haven’t been, or won’t be updated for years.

    • peterwwillis 5 years ago

      The 512 byte limit is still a significant issue because literally nobody enforces the use of EDNS. Anyone can put an arbitrary limit on UDP packets on port 53, either at the service provider or at any intermediate network. Nobody is willing to say "you MUST use EDNS to use our service" (because they don't like rejecting cash-paying customers over technical restrictions), so everyone just gives up and says, fine, I guess we won't depend on EDNS - and it becomes an optional feature that you can't rely on.

      I would be very happy if a variant of QUIC became its own OSI layer 4 protocol so we could start re-engineering all higher level protocols to use QUIC rather than UDP or TCP. If we had done that decades ago, not only would a majority of our network security issues have been nonexistent, we would have had extra performance and bandwidth to advance the state of the art. (If you ask "why not just implement it as UDP now and be backwards compatible", it's because operating systems generally don't provide APIs for network protocols higher than layer 4, and we need the protocol to be natively supported by all TCP/IP stacks to grease wider network protocol adoption and reap the benefits for all network applications)
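      As a small illustration of the API gap being described (my framing, not part of the comment): the socket interface the OS exposes stops at layer-4 transports, so a QUIC stack today has to rebuild reliability, congestion control, and crypto in userspace on top of a plain UDP socket:

```python
import socket

# The kernel's socket API tops out at the layer-4 transports:
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP byte stream
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP datagrams

# There is no native QUIC socket type - CPython's socket module exposes no
# such constant - so QUIC libraries must implement the protocol entirely in
# userspace over a SOCK_DGRAM socket like `udp` above.
assert not hasattr(socket, "SOCK_QUIC")

tcp.close()
udp.close()
```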

      • zamadatix 5 years ago

        > it's because operating systems generally don't provide APIs for network protocols higher than layer 4, and we need the protocol to be natively supported by all TCP/IP stacks to grease wider network protocol adoption and reap the benefits for all network applications)

        I think the exact opposite is true in reality: people want to nimbly implement protocols so badly they'd rather shove everything on top of UDP/TCP or even HTTP than wait around for whole operating systems to age out. From a pure technical design standpoint I agree a lot of things should be done at lower layers, but I don't think it's possible for this to happen as quickly as a lot of people want.

        Not to mention the second travesty of the layered protocol model in the real world: hardware and configurations have started to grow around it. A lot of networks can't handle QUIC simply because UDP 443 won't go through their firewall yet people are expecting everyone to just jump up and start NATing completely different L4 protocols? Remember IPv6 has been around since the late 90s yet people are still sticking to the same NAT solution.

        Became more rambly than I wanted but I'm just a network guy frustrated that we can't keep building up the protocol stack nor can we tear it down.

      • pixl97 5 years ago

        >I would be very happy if a variant of QUIC became its own OSI layer 4 protocol so we could start re-engineering all higher level protocols to use QUIC rather than UDP or TCP.

        So pretty much something that will never get implemented in middleboxes. About the only way for things to get adopted is to tack them onto the HTTP/TLS layer and have Google/Firefox push the update to everyone.

  • nouseforaname 5 years ago

    DNS was built in the 80s, there are still buildings from the 80s that are just fine.

    We're just really really bad at software engineering as a practice.

ahubert 5 years ago

author here - actual project is on https://powerdns.org/hello-dns/

  • pmoriarty 5 years ago

    Thank you for writing this.

    It looks like the content on that site is really just plain, static text. It would be nice if it could be read without requiring the user's browser to allow JavaScript.

    • rnotaro 5 years ago

      I thought it was a plain text document. My JSON reader browser extension was preventing Markdeep from rendering for some unknown reason.

  • Flowdalic 5 years ago

    First, let me say thank you for your efforts towards making DNS more accessible for implementors. Could you elaborate on "CNAME chasing to other zones: let’s not" and "Adding glue from other zones: let’s not"?

totalperspectiv 5 years ago

I clicked because of the camel. Super cool project! DNS has always been one of those black box layers to me. This looks like a great resource for learning just a little, or going all the way down the rabbit hole.

ca98am79 5 years ago

DNS needs to be decentralized, like on a blockchain. The internet won't really be free until this happens. Currently ICANN, the top-level domain registries, and even the registrars have too much control and can take down and censor domains.

  • qznc 5 years ago

    Which would be https://namecoin.org/

    I agree that DNS might be a better use case for blockchains than money.