
  • Yes, I do lose the origin IP and I’m a little bugged by it. It also means that ALL traffic incoming on a specific port of that VPS can only go to exactly ONE private WireGuard peer (see the sketch below). You could avoid both of these issues by having the reverse proxy on the VPS (which is why Cloudflare works the way it does), but I prefer my HTTPS endpoint to be on my own trusted hardware. That’s totally my personal preference though.
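
    To make that limitation concrete, here’s a minimal sketch of the kind of relay involved (addresses and ports are placeholders, not my actual setup):

    ```
    # each public port on the VPS can only relay to one peer
    socat TCP-LISTEN:443,fork,reuseaddr TCP:10.0.0.2:443    # all :443 traffic -> peer A
    socat TCP-LISTEN:8443,fork,reuseaddr TCP:10.0.0.3:443   # peer B needs its own public port
    # peer A only ever sees the VPS tunnel address (10.0.0.1) as the client,
    # so the real origin IP is gone
    ```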

    I trust my VPS provider not to be interested enough in my data to set up special surveillance tooling for each and every possible software combination their customers might have. Cloudflare, on the other hand, only has their own software stack to monitor, and all customers must adhere to it. By design, it’s much easier for them to do statistics or snooping.




  • maiskanzler@feddit.de to Selfhosted@lemmy.world · “Cloudflare is bad. You’re right.” · 4 months ago

    Sure it’s easy to set up, but the same behaviour is what I get with my handrolled solution. I rent a cheap VPS with a fixed IP solely for forwarding all traffic through WireGuard. My DNS entries all point to the VPS, and my servers connect to the VPS to be reachable. It is completely network-agnostic: it requires no port shenanigans on the local network and no fixed IP for my home server’s internet connection.
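
    A minimal sketch of the two WireGuard configs involved (interface names, addresses and keys are placeholders):

    ```
    # /etc/wireguard/wg0.conf on the VPS
    [Interface]
    Address    = 10.0.0.1/24
    ListenPort = 51820
    PrivateKey = <vps-private-key>

    [Peer]
    # the home server; it dials out, so it needs no fixed IP and no open ports
    PublicKey  = <homeserver-public-key>
    AllowedIPs = 10.0.0.2/32

    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address    = 10.0.0.2/24
    PrivateKey = <homeserver-private-key>

    [Peer]
    PublicKey  = <vps-public-key>
    Endpoint   = vps.example.com:51820
    AllowedIPs = 10.0.0.1/32
    PersistentKeepalive = 25   # keeps the NAT mapping open so the VPS can always reach in
    ```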

    Data-security-wise, HTTPS terminates on my own hardware (home server with reverse proxy) and the WireGuard connection adds another layer of encryption on top. There are no secrets or certificates on the rented VPS beyond the bare minimum for the WireGuard tunnel and my public key for SSH access.

    Shuttling the packets on the VPS (internet to WireGuard) is done by socat, because I haven’t had the will or need to get into the weeds with nftables/iptables. I am just happy that it works reliably and don’t mind losing some potential bandwidth to the kernelspace/userspace hoops.
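
    For reference, the in-kernel alternative I keep putting off would look roughly like this (untested sketch, hypothetical addresses):

    ```
    # DNAT public port 443 straight into the tunnel, staying in kernelspace
    sysctl -w net.ipv4.ip_forward=1
    nft add table ip nat
    nft add chain ip nat prerouting '{ type nat hook prerouting priority dstnat; }'
    nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
    nft add rule ip nat prerouting tcp dport 443 dnat to 10.0.0.2:443
    # masquerade so replies route back through the VPS (this is also where the origin IP is lost)
    nft add rule ip nat postrouting ip daddr 10.0.0.2 masquerade
    ```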




  • I am very happy with mine and have only ever had one hiccup during updating, which was due to my Dockerfile removing one dependency too many. I’ve run it bare metal (Apache, MariaDB) as well as containerized (derived custom image, Traefik, MariaDB). Both were okay speed-wise after applying all the steps from the documentation.
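
    A rough sketch of the containerized variant (official images and placeholder credentials shown here; my actual setup used a derived custom image with Traefik in front):

    ```
    # docker-compose.yml sketch
    services:
      db:
        image: mariadb:10.11
        volumes:
          - ./db:/var/lib/mysql        # put this on your fastest drive
        environment:
          MARIADB_DATABASE: nextcloud
          MARIADB_USER: nextcloud
          MARIADB_PASSWORD: change-me
          MARIADB_ROOT_PASSWORD: change-me-too

      app:
        image: nextcloud:apache
        depends_on:
          - db
        ports:
          - "8080:80"
        volumes:
          - ./html:/var/www/html
        environment:
          MYSQL_HOST: db
          MYSQL_DATABASE: nextcloud
          MYSQL_USER: nextcloud
          MYSQL_PASSWORD: change-me
    ```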

    Having the database on your fastest drive is definitely very important. Whenever I look at htop while making big copies or moves, it’s always mariadb that’s shuffling stuff around.

    In my opinion there are two things that make Nextcloud (appear) slow:

    1. Managing the ton of metadata in the database that Nextcloud uses to provide its enhanced functionality.

    2. It is/was a webpage rendered mostly on the server.

    The first issue is hard to tackle, because it is intrinsic and the optimum differs across deployment scales. Optimizing databases is beyond my skill set, so I stick to the documented recommendations.
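
    The low-hanging fruit from the docs is mostly a couple of occ maintenance commands, e.g. (container name is a placeholder):

    ```
    # create the DB indices Nextcloud knows it wants but won't add on its own
    docker exec -u www-data nextcloud-app php occ db:add-missing-indices
    # general repair pass after upgrades
    docker exec -u www-data nextcloud-app php occ maintenance:repair
    ```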

    The second issue is slowly being worked around: many applications in Nextcloud now resemble SPAs that are highly interactive and rendered by the browser. That reduces page reloads and makes everything feel smoother.

    All that said, I barely use the web interface, because I rarely use the collaboration features. If I have to create a share, I usually do that in the app, because that’s where I send the link to people. Most of my use case is just syncing files, calendars and contacts.



  • I did too, but shortly after decommissioning that server the drive became unresponsive. I really dodged a bullet without even realizing it at the time. SMART data did not work through the adapter, and it might have alerted me had it been available.

    Also, unrelated to SMART data, the server failed to reboot because the USB-SATA adapter did not properly reset without a full power cycle (which did not happen with that mainboard’s USB on reboots). It always got stuck searching for the drive. Restarting the server therefore meant shutting it down and calling someone to push the button for me, or using Wake-on-LAN, which thankfully worked but was still a dodgy workaround.
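
    For what it’s worth, some USB-SATA bridges do pass SMART through if smartctl is told which translation layer to use, so it can be worth probing before writing it off (device path is a placeholder):

    ```
    smartctl --scan                # shows which -d type smartctl guesses per device
    smartctl -d sat -a /dev/sdX    # force SCSI-to-ATA translation; works for many bridges
    ```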