salamander014 15 days ago

Hey cool project! I had the same need, and solved it a very different way.

I set up a wireguard server on a publicly accessible VPS.

The neat part about using "lscr.io/linuxserver/wireguard:latest" is that it allows me to codify the number of clients I need. This includes both endpoints and source devices.
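
A sketch of the relevant compose bit (the hostname and peer names here are placeholders, not my actual setup); the PEERS variable is what lets you declare the clients up front:

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - SERVERURL=vps.example.com    # public VPS address (placeholder)
      - PEERS=laptop,phone,homelab   # one generated config per named client
    ports:
      - 51820:51820/udp
    volumes:
      - ./config:/config
```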

The second thing I did was separate out the "networking" bits from the "userspace" bits, meaning it doesn't matter what port the service is running on; the client can still hit it.

Taking that one step further, I just combined the above with haproxy and set my application ports there. This means I can hit haproxy on "someport" inside the VPN and it'll forward to whatever service I've got configured on that "client", which haproxy can see on its LAN.
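
A minimal haproxy sketch of that forwarding (addresses and ports are assumptions; 10.13.13.0/24 is the linuxserver image's default subnet):

```
frontend tunnel_in
    mode tcp
    bind 10.13.13.1:8080          # "someport" on the VPN side of the VPS
    default_backend home_service

backend home_service
    mode tcp
    server home 10.13.13.2:3000   # WireGuard peer at home; 3000 is the app's port
```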

Works great; I'm currently running a simple web page off the whole thing, where you connect to the VPS and it tunnels the actual HTTP connection into Kubernetes in my house.

I was thinking about writing this all up one day, but there's some cleanup to be done. Oh well.

  • aborsy 15 days ago

    VPN traffic is decrypted at the VPS, and TLS may also be terminated by the reverse proxy there.

    A mesh VPN will give you a point-to-point tunnel instead, so even plain HTTP will be secure.

  • 0xdade 15 days ago

    Sounds pretty cool, I have done some similar things in the past with using a vpn to proxy backwards into my home network (hello fellow k8s at home user). I think in this case I wanted to basically set up my one nginx config and never have to change the web server config again and support arbitrary services in the future. I've never used haproxy before, but I wonder if there could be some room for improvement (read: not using unix domain sockets) by using a web server that can dynamically detect upstreams in a particular set of ports. E.g. if all my "tunnel" ports are on localhost:8000-9000, it can dynamically pick them up. I guess I still wouldn't know how to answer the "pick a name for the tunnel at runtime" problem, but it's definitely something worth exploring further!

    If I was doing something that I intended to have running more than an hour or two at a time, I would 100% do something more like what you're describing haha.

kofoednielsen 13 days ago

Hello! I've built a very similar project using WireGuard, Caddy, and under 300 lines of Python, with self-hosting as a prime goal.

Cool part is that you don't even need to install a client. If you have Wireguard and curl you can simply run this one-liner:

  curl https://tunnel.pyjam.as/8080 > tunnel.conf && wg-quick up ./tunnel.conf
  > You can now access http://0.0.0.0:8080 on https://<unique_slug>.tunnel.pyjam.as/
We also use wildcard certificates to avoid leaking the randomly generated subdomains.
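
A rough sketch of what that looks like in a Caddyfile (the DNS provider plugin and token variable are assumptions; wildcard certs require the DNS-01 challenge):

```
# One wildcard cert covers every generated slug, so individual
# subdomains never show up in certificate transparency logs.
*.tunnel.pyjam.as {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy localhost:8000   # upstream is an assumption
}
```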

Check it out if you're interested.

Code: https://gitlab.com/pyjam.as/tunnel

Public instance: https://tunnel.pyjam.as

mcint 15 days ago

For automatic cleanup, you can try bash's trap command. Set up a cleanup trap when you create the file.

In .ssh/rc:

  if [ -n "$DOMAIN" ]; then
    # ... create socket & report to user

    clean_socket() { rm -f /tmp/test.dev.tld.socket; }
    trap clean_socket EXIT INT HUP
  fi

  • 0xdade 15 days ago

    Oooh, I hadn't considered HUP. I tried to use a cleanup script with a bash trap on I think INT and KILL but it didn't seem to work correctly. I had also never tried to use a trap command, though, so there was a good chance I was doing it wrong lol. I'll give this a shot!

  • 0xdade 15 days ago

    Ohh so I just gave this a shot, and I think the trap runs when `.ssh/rc` exits, which is immediately when my bash prompt shows up. But if I want to make it non-interactive (in a really hacky way), I can have my .ssh/rc file just sleep forever if DOMAIN is defined. Then I killed the ssh connection via a `kill` command on the client side and it appropriately cleaned up the socket file in /tmp.

    I combined the infinite sleep in .ssh/rc with `-T` and a simple command ("echo hello") in my client function, and now it prints out the link to visit, hangs until I close it or it gets closed, and cleans itself up.
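
    In case anyone wants to replicate it, the whole hack fits in a few lines of .ssh/rc (the DOMAIN variable, socket path, and URL scheme are assumptions, not my exact script):

```shell
# Sketch of the hacky .ssh/rc: print the link, hang until the
# ssh session dies, then let the trap clean up the socket.
if [ -n "$DOMAIN" ]; then
  SOCKET="/tmp/${DOMAIN}.socket"      # socket path is an assumption
  echo "Visit https://${DOMAIN}/"
  clean_socket() { rm -f "$SOCKET"; }
  trap clean_socket EXIT INT HUP
  while :; do sleep 3600; done        # block forever; killing ssh fires the trap
fi
```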

    This just took the level of hackishness to new heights and I love it.

    • cbluth 15 days ago

      Using a trap didn't work?

0xdade 16 days ago

I hacked together nginx, ssh, and a little bit of bash to make a simple dev tunnel service on my own domain. I thought HN readers could appreciate (and probably roast) it.

  • mrAssHat 15 days ago

    Hacking all those things together feels empowering, like a complex construct that can be built from simple things we are already used to. This article has a very "hacky" spirit, love it!

  • 1oooqooq 15 days ago

    ... and published dns records, and scripts on the client, etc etc etc.

    it's functional. but far from practical or elegant.

    good write up tho

knagy 15 days ago

That unix domain socket solution sounds really nice. I wonder if it would be possible to send something naughty in the Host header (like something with ../../.. in it) to misuse this, or whether nginx does some validation before it reaches the proxy_pass...

I also tried to hack together my own solution [0] just for fun, but I didn't know about the unix socket part, so at the end I went with traefik and redis. :)

[0] https://deadlime.hu/en/2023/10/29/light-at-the-end-of-the-tu...

  • 0xdade 14 days ago

    I updated the post late last night to address the security bits of the host header. Based on my understanding of nginx documentation and some brief testing, I don't think path traversal in the host header is possible -- nginx throws a 400 instead of a 502, which indicates it isn't making it to the proxy_pass yet. I think the $host variable is basically guaranteed to at least match the server_name regex block by the time it reaches the proxy_pass -- so to further tighten it up, you could only allow alphanumeric characters in your server_name regex.
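
    To make that concrete, here's a sketch of the tightened server block (the domain and the socket-naming convention are assumptions, borrowed from the /tmp/test.dev.tld.socket example elsewhere in this thread):

```nginx
# Only alphanumeric subdomains ever match, so by the time $tunnel
# is interpolated into proxy_pass it can't contain "../".
server {
    listen 443 ssl;
    server_name ~^(?<tunnel>[a-z0-9]+)\.dev\.tld$;

    location / {
        proxy_pass http://unix:/tmp/$tunnel.dev.tld.socket;
    }
}
```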

    I just checked out your solution and also learned a new trick about ssh! I didn't know that setting the port to 0 would cause dynamic allocation for the tunnel. It makes sense; I knew about that port-0 behavior (bind to 0 and the kernel picks a free port) for typical Linux processes, but never thought to apply it to an ssh tunnel.
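
    For reference, the port-0 trick looks something like this (the user, host, and local port are placeholders):

```
# Let sshd pick the remote port (0 = dynamic allocation).
# OpenSSH reports the chosen port on the client side, e.g.:
#   Allocated port 49152 for remote forward to localhost:3000
ssh -N -T -R 0:localhost:3000 user@tunnel-host
```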

andydunstall 13 days ago

Very cool! I've been looking for an open source ngrok alternative as well, though for production traffic rather than development. (I couldn't find a good option on awesome-tunneling, so I've been playing around with a proof of concept at https://github.com/andydunstall/pico.)

westurner 15 days ago

awesome-tunneling lists a number of ngrok alternatives: https://github.com/anderspitman/awesome-tunneling

- https://news.ycombinator.com/item?id=39754786

- FWIU headscale works with the tailscale client and supports MagicDNS

  • 0xdade 15 days ago

    I link to awesome-tunneling in my post :) I didn't know about that particular list until after I spent my night doing this.

    I didn't know about headscale, that does seem pretty cool but I think MagicDNS also specifically would introduce a behavior that I didn't particularly want -- TLS certs being issued for my individual hosts, and thus showing up in cert transparency logs and getting scanned. Ultimately this is really only a problem in the first minutes or hours of setting up a cert, though.

    Honestly I would probably recommend every other solution before I recommend my own. It was just fun to figure out and it works surprisingly well for what I wanted -- short lived development tunnels on my own infra with my own domain, without leaking the address of the tunnel automatically.

Fire-Dragon-DoL 14 days ago

I'll throw this out: if you did this in one night, ngrok's $5 plan is probably priced too high. I don't know what they offer, though.

  • remram 13 days ago

    I don't know, you have to count the cost of the VPS running it. I suppose you could use some cloud's free tier maybe?

    • Fire-Dragon-DoL 13 days ago

      I agree you have to count that cost, my point was more like "if they are doing this to multiple people, it shouldn't cost the same as a VPS per person"