Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 110 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Mostly because you need to be able to resolve the TLD. The root DNS servers need to know about every TLD, and it would quickly become a nightmare if they had to store hundreds of thousands of records vs the handful of TLDs we have now. The root servers are hardcoded; they can’t easily be scaled or moved or anything. Their job is solely to tell you where .com is, where .net is, etc. You’re supposed to query them once and then hold on to your cached reply for like 2+ days. Those servers have to serve the entire world, so you want as few queries to them as possible.

    Hosting a TLD is a huge commitment, so it requires a lot of capital and a proper legal entity to contractually commit to its maintenance and compliance with regulations. Those get a ton of traffic, and users getting their own TLDs would shift the sum of all gTLD traffic onto the root servers, which would be way too much.

    With the gTLDs and ccTLDs we have, at least there’s a decent amount of decentralization going on: .ca is managed by Canada, for example, and only Canada has jurisdiction over that domain, just like only China can take away your .cn. If everyone got TLDs, the namespace would be full already; all the good names would be squatted on, waiting to be sold for as much as possible, like already happens with the .com and .net TLDs.

    There have been attempts at a replacement, but so far they’ve all been crypto scams and the dotcom bubble all over again, speculating on cool names to sell to the highest bidder.

    That said, if you run your own DNS server and configure your devices to use it, you can use any domain you want. The problem is gonna be getting the public Internet at large to recognize it as real.
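    As a rough illustration of both halves of that (dig and dnsmasq assumed installed; the .lab TLD and 192.168.1.10 are made up):

    ```sh
    # Ask a root server directly, with recursion off, where .com lives.
    # All it returns is the NS delegation to the gTLD servers, nothing deeper.
    dig @a.root-servers.net com. NS +norecurse

    # A private TLD: everything under the made-up .lab answers with 192.168.1.10,
    # but only for clients pointed at this resolver.
    dnsmasq --no-daemon --port=5353 --address=/lab/192.168.1.10
    ```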



  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank iptables ruleset just for itself.
    • Traffic to and from containers goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure that this works as expected.

    The only thing that should be affected by the host firewall is the proxy process Docker uses to listen on a port on the host and forward it to the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers, the rules just need to be in the right place to work.
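    As a sketch (interface and subnet are placeholders), rules meant for containers go in Docker’s DOCKER-USER chain, which sits in the FORWARD path and isn’t overwritten by Docker’s own rules:

    ```sh
    # Drop forwarded traffic arriving on the WAN interface unless it comes from the LAN,
    # before it ever reaches a container. eth0 and 192.168.1.0/24 are assumptions.
    iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
    ```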





  • Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible”, because the primary target is self-hosters who don’t really want to learn how to set up Docker containers properly.

    The AIO Docker image is an abomination. The other ones are slightly saner, but they still fundamentally mix code and data in the same folder, so it’s not trivial to just replace the app.

    In Docker, the auto-updater should be completely neutered; it’s the wrong way to update the app.

    The packages in the Arch repo are legit saner than the Docker version.
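    For reference, the saner way to update a containerized app is to replace the image and recreate the container, roughly like this for a Compose-managed setup (service name is hypothetical):

    ```sh
    # Pull the newer image and recreate just that service; data in volumes survives.
    docker compose pull nextcloud
    docker compose up -d nextcloud
    ```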


  • I’ve heard very good things about resold HGST Helium enterprise drives, and they can be found fairly cheap for what they are on eBay.

    > I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days. There are like 32TB HDDs out there; just avoid the shingled (SMR) archival drives. I believe the worry about higher-capacity drives is really a question of the maturity of the technology rather than the capacity itself. 4TB drives made today are much better than the very first 4TB drives, made back when they were pushing the limits of the technology.

    Backblaze publishes pretty good drive stats as well, with real-world failure rate data and all.





  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on read-only diskless VMs at work because otherwise customers refuse to sign, because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.



  • Neither do Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare, and we have Fortune 100 companies served with LetsEncrypt certs.

    I haven’t seen an EV cert in years; browsers stopped caring ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs which IP requested a certificate; that’s a lot more than what regular CAs do.

    I guess one more to the pile of why everyone hates Zscaler.



  • IMO the biggest attack vector there would be a Minecraft exploit like the log4j one, so the most important part to me would be making sure the game server is properly sandboxed just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user. What can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.

    With systemd, I’d use the various Protect* directives like ProtectHome and ProtectSystem=full (sketch below), or failing that, a container (Docker, Podman, LXC, manually, there are options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or other SUID binaries if they don’t exist in the first place.

    That said, the WireGuard solution is ideal because it limits potential attackers to people you handed a key to, so at least you’d know who breached you.

    I’ve forgotten Minecraft servers online and really nothing happened whatsoever.
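    A rough sketch of the systemd approach (user, paths and Java flags are made up; tune the sandboxing directives to taste):

    ```sh
    # Run the server as a transient, sandboxed service: read-only OS, no /home, no privilege escalation.
    systemd-run --unit=minecraft --uid=minecraft \
      --property=ProtectSystem=full \
      --property=ProtectHome=true \
      --property=NoNewPrivileges=true \
      --property=PrivateTmp=true \
      --working-directory=/srv/minecraft \
      java -Xmx4G -jar server.jar nogui
    ```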


  • If your stuff is all Docker then yeah, immutable makes sense, as it makes the entire box declarative and immutable: you can get back the exact same operating Docker environment on the server, and then you can get the exact same Docker workloads going with the Docker Compose configurations.

    If you ever need to run stuff you’d run on Debian, you can just shove it in a Debian container.

    That said, if most of the stuff is containers, the risk of just the core Debian breaking is fairly low. Pick whatever is easiest for you to deal with based on your needs. Immutable distros have a bit of a learning curve.
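    For example, a throwaway Debian shell on an immutable host, with nothing installed on the base system (image tag and mount path are just an example):

    ```sh
    # Interactive Debian container, current directory mounted at /work, removed on exit.
    docker run -it --rm -v "$PWD:/work" -w /work debian:bookworm bash
    ```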


  • I learned it accidentally while trying to get root on an encrypted dataset working with systemd init without sd-zfs. This turns out to be how the zfs utility works internally to signal the driver “hey, it’s okay, I’m a ZFS utility, the user isn’t using mount directly”, and how you deal with mounting your root dataset to the temporary /sysroot while it has its mountpoint set to / in the initramfs, before pivoting root.

    Obviously, don’t use that for anything other than recovering your data; if you want to keep using this array, you should figure out the mountpoints properly so ZFS does it automatically. It shouldn’t break anything, but it’s gross: either set mountpoint=legacy and use fstab, or set its mountpoint in ZFS and use zfs mount.
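    A sketch of both proper options, using a made-up tank/data dataset:

    ```sh
    # Option 1: legacy mountpoint, managed through fstab and mount(8).
    zfs set mountpoint=legacy tank/data
    echo 'tank/data  /mnt/data  zfs  defaults  0 0' >> /etc/fstab
    mount /mnt/data

    # Option 2: let ZFS manage the mountpoint itself.
    zfs set mountpoint=/mnt/data tank/data
    zfs mount tank/data
    ```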


  • The trick for this one is mount -t zfs -o zfsutil internal /mnt/some/path

    Assuming the root dataset is mountable. If you have canmount=off set on the dataset, it will refuse to mount.

    If it’s set to mountpoint=legacy then you don’t need -o zfsutil, but you still need to provide both the source and destination paths. Otherwise you’ll get the fstab error, because mount can’t figure out what to mount or where to mount it.
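    Put together (internal is their pool name; the target path is arbitrary):

    ```sh
    # Mountpoint managed by ZFS: tell mount(8) it's fine by passing zfsutil.
    mount -t zfs -o zfsutil internal /mnt/some/path

    # mountpoint=legacy: plain mount works, but source and target must both be given.
    mount -t zfs internal /mnt/some/path
    ```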


  • Yep, there’s a reason I reached directly for that configuration. WireGuard uses UDP, and that’s one of the first things that gets blocked.

    Turns out that’s also the kind of protocol corporate VPNs use, running over TCP on port 443. They call those “SSL VPNs”. They get to weed out all the commercial VPNs used to bypass their firewalls, as well as most torrent/game activity, while still mostly catering to their business guests.
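    As a rough illustration of that trick (not WireGuard itself; the cert/key paths are made up), OpenVPN can be told to listen on TCP 443 so it blends in with HTTPS:

    ```sh
    # A VPN endpoint that looks like just another HTTPS server from the outside.
    openvpn --proto tcp-server --port 443 --dev tun \
      --ca ca.crt --cert server.crt --key server.key --dh dh.pem
    ```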