• 2 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 8th, 2023


  • buedi@feddit.de to Selfhosted@lemmy.world · Nextcloud appreciation post
    English · 2 months ago

    I have run Nextcloud for many, many years. For a very long time I hosted it on Hetzner's second-lowest webspace tier. It was not very fast there (you get what you pay for), but fast enough for our needs. Later I moved it to an Azure VM, and after that to my home server, where it runs blazingly fast, especially since the last updates they pushed out.

    In all that time I never reinstalled; I just upgraded to the newer versions when they came out. The only times I had problems upgrading were on the cheap webspace instance at Hetzner, when the upgrade process took longer than the PHP timeout my very cheap hosting plan allowed. So it was never Nextcloud's fault, just that I hosted it on basically the cheapest hosting plan I could find.
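
    By the way, if anyone hits the same PHP timeout: running the upgrade from the shell with occ instead of the web updater avoids it entirely. Roughly like this (user and install path are assumptions, adjust to your setup):

    ```sh
    # Run the Nextcloud upgrade from the CLI so the web server's PHP timeout never applies.
    # /var/www/nextcloud and www-data are placeholders for a typical install.
    sudo -u www-data php /var/www/nextcloud/occ upgrade
    ```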

    We use it for file sharing, calendar + contacts (+ sync with DAVx), Notes and of course Talk. For Talk to make full use of voice + video calls you should have a TURN server, but if you do not use that (if you just text), it ran great even on the webspace instance at Hetzner.
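
    If you go the container route anyway, the TURN server can be one extra container. A minimal sketch, assuming coturn's official image; realm and secret are placeholders that you would also enter in Talk's admin settings:

    ```yaml
    # docker-compose sketch for a TURN server usable by Nextcloud Talk.
    services:
      coturn:
        image: coturn/coturn
        network_mode: host   # TURN relays many UDP ports; host networking keeps it simple
        command: >
          --realm=turn.example.com
          --use-auth-secret
          --static-auth-secret=CHANGE_ME
          --listening-port=3478
    ```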

    We are very happy in our family that it exists, that it is free and that it has served us well for so many years.


  • You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. And of course containers can network and share services: for external access, for example, I run only one instance of Traefik. Or one coturn for Nextcloud and Synapse.
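
    The sharing part is just a common Docker network that all stacks attach to, so the single Traefik instance can reach everything. A minimal sketch, with an assumed pre-created network called "proxy":

    ```yaml
    # Each stack joins the same external network (create it once: docker network create proxy).
    services:
      nextcloud:
        image: nextcloud
        networks:
          - proxy
    networks:
      proxy:
        external: true
    ```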


  • I would absolutely look into it. Many years ago, when Docker emerged, I did not understand it and called it “hipster shit”. But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and no idea how to fix them.

    Years passed and containers stayed, so I started to take a closer look and tried to understand what you can do with them and what you cannot. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into it to try to fix something.

    Today my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The issues I had earlier, when running everything directly on the box in Linux, were things like one application needing PHP 8.x while another, older one still only ran on PHP 7.x. Or multiple applications depending on one specific library, where after updating it one app worked and the other didn't anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own, and if something breaks in one container, it does not affect the others.
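
    To illustrate the PHP point: in a compose file each app simply pins its own runtime, so the conflict cannot happen. A sketch with made-up app names:

    ```yaml
    # Two apps, two PHP versions, zero conflicts: each image ships its own runtime.
    services:
      old-app:
        image: php:7.4-apache
      new-app:
        image: php:8.2-apache
    ```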

    Another big plus is the backups you can do. I back up every docker-compose file + data for each container with Kopia. Since barely anything is installed directly in Linux, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again after a hardware failure.
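
    For anyone curious, a rough sketch of what that round-trip looks like on the Kopia CLI (the paths are made up; Kopia also supports S3, SFTP and other backends):

    ```sh
    # Create a repository once, then snapshot each stack directory (compose file + data).
    kopia repository create filesystem --path /mnt/backup/kopia
    kopia snapshot create /opt/stacks/nextcloud

    # On the test VM: connect to the repository and restore before starting the containers.
    kopia repository connect filesystem --path /mnt/backup/kopia
    kopia snapshot restore <snapshot-id> /opt/stacks/nextcloud
    ```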

    I really started to love Docker, especially in my Homelab.

    Oh, and you would think everything being containerized means high resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.
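
    If you want to check the same numbers on your own box, Docker itself can report per-container memory:

    ```sh
    # One-shot memory report for all running containers.
    docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
    ```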



  • I love Traefik! When I started I tried NGINX, but could not wrap my head around it. So I tried Caddy. Pretty easy to understand, and I used it for a while. Then I had demands Caddy could not meet and stumbled upon Traefik. As you said, there is a learning curve, but for me it was much easier than NGINX. I like that you can put the Traefik config inside the compose files and that a service is only active in Traefik when the actual containers are up and running. I added CrowdSec to my external-facing Traefik instance and even use a plain Traefik instance for all my internal services as well. And it can forward HTTP, HTTPS, TCP and UDP.
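
    For anyone wondering what “config inside the compose file” looks like: Traefik's Docker provider reads labels, so the router definition lives right next to the service it routes, and disappears with it. A minimal sketch with an assumed hostname and entrypoint name:

    ```yaml
    # Traefik discovers this service via labels; stop the container and the route goes away.
    services:
      whoami:
        image: traefik/whoami
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
    ```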









  • I was about to order an SLZB-06, but they were out of stock. That one looks like exactly what I want. I never really looked into Node-RED, but normalizing everything before using it makes sense. The SLZB-06 makes the Zigbee network connection independent from any server, and routing everything through MQTT makes you independent from whatever software has to communicate with the devices. Sounds like a lot of flexibility and independence.
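
    In case you go the same route: since the SLZB-06 exposes the coordinator over the network, Zigbee2MQTT just points at a TCP address instead of a USB device. A sketch of the relevant configuration.yaml fragment, with hostname and port as assumptions:

    ```yaml
    # Zigbee2MQTT: network coordinator instead of a local serial stick.
    mqtt:
      server: mqtt://localhost:1883
    serial:
      port: tcp://slzb-06.local:6638
    ```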


  • Matter, and whether external connections are needed or not, will be interesting to follow. My HA instance is internal-only too, since it does nothing that requires access over the internet. And OwnTracks delivers to a separate MQTT instance that will have no internal devices. So my HA is shut off from the internet, and I will pay attention that any smart-home gear I buy does not require an internet connection either.




  • May I ask what gateway / dongle you use? Oh, and that weather station sounds interesting; if you happen to have a manufacturer / model for me, I would like to read up on that too :-)

    I am leaning towards MQTT too, because of other solutions that already integrate with it. It looks like a great way to throw different data into a single pool and make it accessible in the same way, no matter if it is a switch, a temperature sensor, a camera, GPS data, etc.
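
    That single pool is also easy to inspect: the Mosquitto client tools can subscribe to everything at once, which is handy for seeing what each device actually publishes:

    ```sh
    # Print every topic + payload flowing through the broker (-v includes the topic name).
    mosquitto_sub -h localhost -t '#' -v
    ```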


  • I am leaning more towards something that might be complex to set up but has broader support and is future-proof, and the latter does not really seem certain for any of the current protocols. Maybe in 10 years things will have settled and everyone will use the same protocol, but who knows which one it will be :-) I am leaning towards Zigbee2MQTT for now, since my impression is that MQTT leaves you very flexible. I already use MQTT for Frigate and OwnTracks, and if other devices put their data into MQTT, I have a pretty open pool for all the data / actions, even if I switch from HA to something else in the future. I feel MQTT is here to stay for a while, but well… that could all be wrong, haha!


  • Thanks for your input! MQTT is not an issue; I have Mosquitto running on both my installs (one with HA + Frigate at a remote location, the other HA for me, which is also used by OwnTracks). I will even set up a second instance and connect the two, so I have one broker for the part that needs internet access (OwnTracks) and do not have to share that MQTT instance with the devices in my home.
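
    Connecting the two instances is built into Mosquitto via bridges. A sketch of the internal broker's mosquitto.conf, pulling only the OwnTracks topics in from the internet-facing broker (connection name and address are assumptions):

    ```conf
    # Bridge on the internal broker: fetch owntracks/# from the external broker, share nothing else.
    connection owntracks-bridge
    address external-broker.example.com:1883
    topic owntracks/# in 0
    ```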

    My impression of Matter was also that it is not “done” yet and device support is poor. On the other hand, you read at every corner that it will be the future. This is why the SkyConnect adapter looked very interesting to me at first, but most of the features I would use now (probably Z2M, Docker compatibility) do not seem to work yet, or at least not reliably.


  • To be honest, I might have mixed something up while reading up on all those standards. After researching a topic, my browser usually ends up with a hundred tabs “just in case I need that information again”, and honestly, in all that information I cannot find it specifically. My takeaway from all of it was: “Make sure every device you buy works with your specific gateway, even if it says it works with protocol X.”