isn’t it an annoyance having to connect to your home network all the time?
It’s less annoying than the gnawing fear that my network might be an easy target for attackers.
Yes, OP, I highly recommend a GL.iNet device. It’s pocket-sized and always does the job.
It’s also great for shitty wifi that tries to limit how many devices you can connect. The router will appear as one MAC and then all your other devices can route traffic through it.
As someone who has owned enterprise servers for self-hosting, I agree with the previous comment that you should avoid owning one if you can. They might be cheap, but your long-term ownership costs are going to be higher. That’s because as the server breaks down, you’ll be competing with other people for a dwindling supply of compatible parts. Unlike consumer PCs, server hardware is incredibly vendor-locked. Hell, my last Proliant would keep the fans ramped at 100% because I installed an HDD that the BIOS didn’t like. This was after I spent weeks tracking down a disk that would at least be recognized, and the only drives I could find were already heavily used.
My latest server is built with consumer parts fit into a 2U rack case, and I sleep so much easier knowing I can replace any of the parts myself with brand new alternatives.
Plus, as others have said, a 1U can be really loud. I don’t care about the sound of my gaming computer, but that poweredge was so obnoxious that, despite it being in the basement, I had to smother it with blankets just so the fans didn’t annoy me when I was watching TV upstairs. I still have a 1U Dell Poweredge, but I specifically sought out the generation that still lets you hack the fan speeds in IPMI. From all my research, no such hack exists for the Proliant line.
Assuming that the disk is of identical (or greater) capacity to the one being replaced, you can run btrfs replace.
https://wiki.tnonline.net/w/Btrfs/Replacing_a_disk#Replacing_with_equal_sized_or_a_larger_disk
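A minimal sketch of the replace flow (the device paths and mountpoint here are assumptions, adjust for your setup):

```shell
# Start an online replace of the old disk with the new one
# (/dev/sdb = old disk, /dev/sdc = new disk, /mnt/pool = mountpoint -- all assumptions)
sudo btrfs replace start /dev/sdb /dev/sdc /mnt/pool

# The replace runs in the background; check on it with:
sudo btrfs replace status /mnt/pool
```

The filesystem stays mounted and usable the whole time, which is the nice part compared to a full remove/add rebalance.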
I’d recommend BTRFS in RAID1 over hardware or mdadm raid. You get FS snapshotting as a feature, which would be nice before running a system update.
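For instance, a pre-update snapshot could look something like this (the paths are assumptions, and this presumes your root is a btrfs subvolume):

```shell
# Take a read-only snapshot of the root subvolume before running updates
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update

# List existing snapshots/subvolumes
sudo btrfs subvolume list /
```

If the update goes sideways, you can restore files from the snapshot instead of reinstalling.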
For disk drives, I’d recommend new if you can afford them. You should look into shucking: It’s where you buy an external drive and then remove (shuck) the HDD from inside. You can get enterprise grade disks for cheaper than buying that same disk on its own. The website https://shucks.top tracks the price of various disk drives, letting you know when there are good deals.
Haven’t used it yet, but I’ve been researching authentik for my own SSO.
BTRFS should be stable in the case of power loss. That is to say, it ought to recover to a valid state. I believe the only unstable modes are RAID 5/6.
I’d recommend BTRFS in RAID1 mode over mdadm RAID1 + ext4. You get checksumming and scrubs to detect drive failures and data corruption. You also have snapshotting, in case you’re prone to the occasional fat-fingered rm -rf.
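A scrub reads every block and verifies it against its checksum; with RAID1 it can repair bad blocks from the good copy. Rough sketch (mountpoint is an assumption):

```shell
# Kick off a scrub of the whole filesystem
sudo btrfs scrub start /mnt/pool

# Check progress and any error counts once it finishes
sudo btrfs scrub status /mnt/pool
```

Worth running on a schedule (e.g. monthly via a systemd timer) so corruption gets caught before the second copy also degrades.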
For backup, maybe a blu-ray drive? I think you would want something that can withstand the salty environment, and maybe resist water. Thing is, even with BDXL discs, you only get a capacity of 100GiB each, so that’s a lot of discs.
What about an offsite backup? Your media library could live ashore (in a server at a friend’s house). You issue commands from your boat to download media, and then sync those files to your boat when it’s done. If you really need to recover from the backup, have your friend clone a disk and mail it to you.
Do you even need a backup? Would data redundancy be enough? Sure, if your boat catches fire and sinks, your movies are gone, but that’s probably the least of your problems. If you just want to make sure that the salt and water don’t destroy your data, how about:
This would probably be cheapest and have the least complexity.
I wouldn’t trust anything like that to the open internet. It would be better to access the system over a VPN when you’re outside the network.
Yeah, I believe there’s some kind of bridge mode you must enable on the host’s interface.
As others have said, a reverse proxy is what you need.
However I will also mention that another tool called macvlan exists, if you’re using containers like podman or docker. Setting up a macvlan network for your containers will trick your server into thinking that the ports exposed by your services belong to a different machine, thus letting them use the same ports at the same time. As far as your LAN is concerned, a container on a macvlan network has its own IP, independent of the host’s IP.
Macvlan is worth setting up if you plan to expose some of your services outside your local network, or if you want to run a service on a port that your host is already using (eg: you want a container to act as DNS on port 53, but systemd-resolved is already using it on the host).
You can set up port forwarding at your router to the containers that you want to publicly expose, and any other containers will be inaccessible. Meanwhile with just a reverse proxy, someone could try to send requests to any domain behind it, even if you don’t want to expose it.
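As a rough sketch with docker (the interface name, subnet, and IPs are assumptions — substitute your own LAN details):

```shell
# Create a macvlan network bridged onto the host's physical interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan

# This DNS container gets its own LAN IP, so its port 53 doesn't clash
# with systemd-resolved on the host (image name is an assumption)
docker run -d --name dns --network lan_macvlan \
  --ip 192.168.1.53 ubuntu/bind9
```

One gotcha: by default the host itself can’t talk to containers on a macvlan network directly; that’s the bridge workaround mentioned elsewhere in this thread.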
My network is set up such that:
Did you mean source-available?
I guess? I always thought there was some pedantic Stallman-esque argument for the differentiation between FOSS and OSS, independent of the Open Source vs Source Available distinction.
Not sure if you’re able to edit the title, but this doesn’t look like FOSS, just open source.
Ooh! Thanks for the tip! Been looking for some affordable drives for my next system.
I bought a LFF Dell Poweredge back in the fall, and have been waiting on a good deal for 3.5" disks. My current machine is a SFF HP Proliant, and I hate how much a 2.5" drive with good capacity costs.
…and of course Duck Game never got released on GoG
Fuck this greedy bullshit
Does the user running qbittorrent have write access to the downloads directory? Any special messages in the logs?
You might also want to try running qbittorrent through docker. I use https://github.com/DyonR/docker-qbittorrentvpn. Just make sure that you set the PUID and PGID to match a user id + group id that has r/w access to your downloads directory.
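Something along these lines (the image tag, IDs, and paths are assumptions — check the repo’s README for the exact variables it expects):

```shell
# VPN-wrapped qbittorrent; NET_ADMIN is typically needed for the tunnel
docker run -d --name qbittorrentvpn \
  --cap-add NET_ADMIN \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt/myfiles/torrent_downloads:/downloads \
  -v /path/to/vpn/config:/config \
  dyonr/docker-qbittorrentvpn
```

The PUID/PGID values should match whatever owns your downloads directory on the host.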
There’s some setting in sonarr/radarr, I think it’s called “remote path mapping” or something. If you have different mounted volume paths between the torrent container and sonarr, you need to set this:
Suppose:
- Your baremetal host has the directory /mnt/myfiles
- Your torrent container mounts /mnt/myfiles/torrent_downloads to /downloads
- Your sonarr container mounts /mnt/myfiles/torrent_downloads to /data/torrent_downloads, and /mnt/myfiles/shows to /data/shows (for copying completed files)
You need a remote path mapping to tell sonarr that the path in the torrent container is different from the path where sonarr should look. The torrent client says “I have a new show to copy, it’s in /downloads”. Sonarr doesn’t have /downloads, but if you set up the path mapping, it knows that /downloads on the torrent client is actually equivalent to /data/torrent_downloads in sonarr. Thus, in the sonarr container, it copies the file from /data/torrent_downloads to /data/shows.
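In docker run terms, the mounts described above would look something like this (the image names are assumptions):

```shell
# Torrent client sees downloads at /downloads
docker run -d --name qbittorrent \
  -v /mnt/myfiles/torrent_downloads:/downloads \
  linuxserver/qbittorrent

# Sonarr sees the same host directory at a different path
docker run -d --name sonarr \
  -v /mnt/myfiles/torrent_downloads:/data/torrent_downloads \
  -v /mnt/myfiles/shows:/data/shows \
  linuxserver/sonarr

# Then in Sonarr: Settings -> Download Clients -> Remote Path Mappings
#   Remote Path: /downloads
#   Local Path:  /data/torrent_downloads
```

The mapping just translates between the two container-local views of the same host directory.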
The user and group IDs inside the container don’t have to match any user on your host machine. It’s possible that user 700 is configured as the user to launch inside the container, in which case you should set the ownership of the directory to match what the container expects.
Eg: The container for my torrent client runs as user 700 group 700. My host machine does not have either of those IDs defined. My torrent directory must be chown’d by 700:700 or else the container can’t read/write torrents.
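So the fix looks roughly like this (the IDs and path are assumptions, matching my example above):

```shell
# Give ownership to the UID/GID the container runs as (700:700 here),
# even though no such user exists on the host
sudo chown -R 700:700 /mnt/myfiles/torrent_downloads
```

`ls -ln` (instead of `ls -l`) shows the raw numeric IDs, so you can confirm what the directory is actually owned by.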
Agree with the other user that using tar and then zipping the tarball will preserve permissions.
Alternatively, you could open an ssh connection and rsync the files between machines.
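A sketch of both approaches (hostnames and paths are assumptions):

```shell
# tar preserves ownership and permissions inside the archive
# (-p on extract restores them)
tar -czpf recipes.tar.gz -C /srv recipes
scp recipes.tar.gz user@newhost:/tmp/
ssh user@newhost 'tar -xzpf /tmp/recipes.tar.gz -C /srv'

# or rsync directly; -a preserves permissions, owners, and timestamps
rsync -az /srv/recipes/ user@newhost:/srv/recipes/
```

Mind the trailing slash on the rsync source — with it, rsync copies the directory’s contents rather than the directory itself.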
Last option would be to export your recipes from the old host, build a new container from scratch on the new one, and import them back again: https://docs.tandoor.dev/features/import_export/
Yep, the problem was that docker started before the NFS mount. Adding the dependency to my systemd docker unit did the trick!
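For anyone finding this later, the ordering fix looks roughly like this (the mount unit name is an assumption — find yours with `systemctl list-units -t mount`):

```shell
# Create a drop-in override for docker.service
sudo systemctl edit docker.service

# Add these lines to the override:
#   [Unit]
#   After=mnt-myfiles.mount
#   Requires=mnt-myfiles.mount

sudo systemctl daemon-reload
sudo systemctl restart docker
```

With Requires= plus After=, docker won’t start until the NFS mount unit is up, and it gets stopped if the mount goes away.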