  • So many suggestions here, but I thought I’d chime in because I have a setup very similar to what you suggested, and I found a very easy way of hosting it securely. I run Unraid on a system in my house, with my web service in a Docker container, exposed through a Cloudflare Tunnel. There is an Unraid plugin for Cloudflare Tunnels that takes a lot of the configuration work out of getting the client running locally. You also have to create the corresponding tunnel in the Cloudflare dashboard and have a domain registered with them to link it to.

    The way it works: when someone requests your domain (or subdomain) in their browser, Cloudflare receives the request and forwards the traffic to the cloudflared client app you set up on your computer. That client then forwards the traffic to the container hosting your web service, and bidirectional communication is established that way.
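
    If you’re not on Unraid, the client itself is just one container. A minimal sketch, assuming you’ve already created the tunnel in the Cloudflare Zero Trust dashboard and copied its token (<YOUR_TUNNEL_TOKEN> is a placeholder):

        # Run the cloudflared tunnel client; it dials OUT to Cloudflare,
        # so no inbound ports need to be opened on your network.
        docker run -d --name cloudflared --restart unless-stopped \
          cloudflare/cloudflared:latest \
          tunnel --no-autoupdate run --token <YOUR_TUNNEL_TOKEN>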

    The benefits to this system are:

    • Relatively easy setup, especially if you want to expose more services in the future (you may end up running a separate cloudflared container per service, though a single tunnel can route multiple hostnames via its ingress rules)
    • No need to open ports on your router or home firewall. The cloudflared client makes an outbound connection to Cloudflare and authenticates with a tunnel token, so nothing on your network is directly reachable from the Internet.
    • None of your users ever learn your home IP address; they only ever connect to Cloudflare’s servers, which reveal nothing about what’s on the other side of the tunnel.
    • It’s free (not including the cost of registering your domain)
    • You don’t have to worry about changing anything if your ISP randomly changes your IP address. Hell, you could even move to a new house and take your computer with you and you wouldn’t have to reconfigure anything.

    Downsides:

    • You have to trust that Cloudflare is not scraping all the traffic going through the tunnel.
    • Some people have a moral issue with giving Cloudflare more responsibility for hosting “the Internet”. We already rely on their infrastructure heavily for large sections of the Internet. If they ever become malicious or compromised, there is a lot to lose as a society.

    I believe you can recreate this setup without Cloudflare using WireGuard and a rented VPS, but it requires more knowledge to set up and introduces more points of failure. It would also cost more: WireGuard itself is FOSS, but a VPS will run you at least a few bucks a month.
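
    For a rough sense of that route (all names, keys, and addresses below are made up): you peer your home server with the VPS over WireGuard, then run a reverse proxy on the VPS that forwards public traffic down the tunnel.

        # On the VPS: generate a keypair (repeat on the home server).
        wg genkey | tee vps.key | wg pubkey > vps.pub

        # /etc/wireguard/wg0.conf on the VPS would look roughly like:
        # [Interface]
        # Address = 10.0.0.1/24
        # ListenPort = 51820
        # PrivateKey = <vps-private-key>
        #
        # [Peer]                          <- the home server
        # PublicKey = <home-public-key>
        # AllowedIPs = 10.0.0.2/32

        wg-quick up wg0   # bring the tunnel up

        # Then point nginx/Caddy on the VPS at 10.0.0.2:<service-port>, so
        # visitors only ever see the VPS's IP, much like the Cloudflare setup.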

    I currently have 2 services exposed via Cloudflare Tunnels on my Unraid system at home. They’ve been running for over a year now with zero interruptions.









  • Agreed. The fact that they ask while giving no information instead of just buying it makes me feel like they can’t comfortably afford it without sacrificing something else. If they really don’t have very strong opinions about it, they’re probably better off saving their money.



  • The backup and easy setup on other servers is not necessarily super useful for a homelab, but it’s a huge selling point at the enterprise level. You can make a VM template of your host with Docker installed and your Compose definitions but no actual data, then spin up as many copies as you want, and they’ll just download the images they need to run. Copying VMs with all the images baked in takes much longer.

    And regarding the memory footprint, you can get it even lower using Podman because it’s daemonless. It is a little more work to make containers auto-start, though, because you have to wire them into systemd yourself (Podman can generate the unit files for you; see the sketch below). But it’s still a great option: it works on Windows too and can parse Compose configs as well. Just running Docker Desktop on Windows takes up about 1.5 GB of memory for me, but I still prefer Docker Desktop because it has some convenient features.
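
    For example, with a hypothetical container named "myapp", the classic way is podman generate systemd (newer Podman versions also offer Quadlet):

        # Generate a unit file for an existing container and install it
        # as a user service that starts on boot.
        podman generate systemd --new --files --name myapp
        mv container-myapp.service ~/.config/systemd/user/
        systemctl --user daemon-reload
        systemctl --user enable --now container-myapp.service
        loginctl enable-linger "$USER"   # keep user services alive after logout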





  • Yes, that’s correct. And containers are not really portable in the way you described. They do have a mini OS in them, but their state is not saved when they are “offline”. It’s better to think of a container as an instance of a template, called an image. You can save an image to a file, move that to a new PC, and load it back into Docker, but that’s usually unnecessary: as long as you have Internet access, you just need the name of the image and Docker will download it if it doesn’t already have it. For most popular programs you’ll find a pre-built image, so just follow its instructions on what settings to use for things like volume mounts and environment variables. That configuration can be saved in a Docker Compose file for easy reuse, instead of typing a really long command line every time you run the container. The Compose file is all you really need to move to the other PC, and it’ll just download your image and run everything as before.
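
    A minimal sketch of both paths (the image name and settings here are just examples):

        # Moving an image by hand -- rarely needed if the new PC has Internet:
        docker save -o myimage.tar nginx:latest
        docker load -i myimage.tar          # run this on the new machine

        # docker-compose.yml is what you actually carry around:
        # services:
        #   web:
        #     image: nginx:latest
        #     ports:
        #       - "8080:80"
        #     volumes:
        #       - ./site:/usr/share/nginx/html

        docker compose up -d   # pulls nginx:latest automatically if missing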


  • The data that your software interacts with is external to a container but can be mapped into the container’s file system so that the software can interact with it. For example, your Minecraft server relies on disk files consisting of the world state and server config files (and plugins, if you have any). These still live on your host system, not inside the container. To transfer this to another system, you move those files from your host to the new host, then copy your Docker config files over too, so you can start a new container that points to your files on disk. It should function the same as it did before, though possibly with a different IP address.

    An easy way to think of it: a container can be recreated (removed and started fresh from its image), which tools like Compose do routinely when you change settings or update the image. When a container is recreated, all the files inside it reset to the image’s initial state. If you go into a container, create files, and then recreate the container, those files will be gone. Any data that needs to persist must live on your host filesystem and be volume-mounted into the container.
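
    Using the Minecraft server as the example again (this assumes the popular itzg/minecraft-server image; the host path is arbitrary):

        # World data, server.properties, plugins, etc. all live in ./mc-data
        # on the HOST; the container itself stays disposable.
        docker run -d --name mc \
          -e EULA=TRUE \
          -p 25565:25565 \
          -v "$(pwd)/mc-data:/data" \
          itzg/minecraft-server

        # Migrating = copying ./mc-data (plus this command, or its Compose
        # file) to the new host and starting a fresh container there.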




  • Realistically, I only see 3 risks in using Keeper: malware on my device that grabs passwords from my clipboard as I copy them, malware that takes control of my device after I’ve unlocked my password manager, or the cloud storage being completely wiped out in some freak accident.

    1 and 2 are risks for anyone using any password manager. And 3 is extremely unlikely, since they use AWS for storage with multi-zone and multi-region redundancy, which is certainly much more reliable than self hosting.

    The risk of actually having your passwords cracked, even if the cloud data is leaked, is practically zero as long as you have a master password of decent length and complexity and 2FA enabled. The risk from a MITM attack or other network-based interception is just as low because of the zero-knowledge architecture (as you mentioned) and the strong encryption used.
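
    For a rough sense of the numbers (illustrative, not Keeper-specific): a random 16-character password drawn from an alphabet of about 70 symbols has entropy

        H = L \cdot \log_2 N \approx 16 \cdot \log_2 70 \approx 98 \text{ bits}

    so even at a trillion guesses per second, brute-forcing 2^98 possibilities would take on the order of 10^10 years, and that’s before the key-derivation stretching vaults apply on top.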

    Anyone promoting other password managers as more secure either isn’t considering the risk of data loss from self hosting or is buying too much into their password manager’s marketing. I think it’s totally reasonable to prefer other options for feature support or subscription price, though. A couple of features that made me choose Keeper were:

    • The ability to create Records, which can store anything, including files. I use it for sensitive documents like tax returns that you’d traditionally keep in a safe or filing cabinet.
    • A family plan that makes it easy to share passwords with people on my plan (great for things like streaming services). This brought the price down to a reasonable level.

    There might be other password managers now that support these features, as I haven’t kept up with them. I subscribed to Keeper about 6 years ago and haven’t had a reason to switch. I’m open to suggestions if people know of other managers with better features.