• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Yep, and I see evidence of that over-complication in some ‘getting started’ questions: people ask about really convoluted design points, and then others reinforce it by doubling down or bringing up other weird exotic stuff, when the asker might be better served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, by installing a package and just letting it run, or by running a single podman or docker command. If they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.
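    To be concrete about the simple end of that spectrum, a lot of services really can be a single container started by hand; a rough sketch (image name, port, and paths are made up for illustration):

        # Hypothetical example: run one self-hosted app as a single container,
        # publish its port, and keep its data in a plain host directory.
        podman run -d \
          --name myservice \
          --restart=always \
          -p 8080:8080 \
          -v /srv/myservice/data:/data \
          docker.io/example/myservice:latest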


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. People pile on complexity to imitate what they are asked to do professionally, which is either genuinely bad practice that happens to have hype and marketing behind it, or only brings value at scales beyond a household’s hosting needs. Far simpler setups will suffice, and they are nearly zero-touch day to day.


  • For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content from the filesystem than it is at making an HTTP call to another HTTP service. For self-hosting types of applications, I’d guess that percentage goes to 99.9%.

    If you are in a situation where serving the files through your reverse proxy directly does not scale, throwing more containers behind that proxy won’t help in the static content scenario. You’ll need to do something like a CDN, and those like to consume straight directory trees, not containers.

    For a dynamic backend, maybe. Mainly because you might screw up, and your backend code should be isolated to mitigate security oopsies. Containers are also often useful for managing dependencies, but that facet is less useful for golang, where the resulting binary is pretty well self-contained except maybe for a little light usage of libc.
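    To illustrate that last point with a rough sketch (the binary name is made up), a Go backend usually builds down to one file that doesn’t need a container image to carry its dependencies:

        # Build without cgo so the result doesn't even depend on libc.
        CGO_ENABLED=0 go build -o mybackend .
        file mybackend    # typically reports a statically linked executable
        # One file to copy wherever the reverse proxy can reach it; isolation can
        # come from a dedicated user or systemd sandboxing instead of a container.
        ./mybackend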


  • But if you already have an nginx or other web server that otherwise has to start up (which is in all likelihood the case), you don’t need any more auto-startup; the “reverse proxy” that is already running can just serve it. I would say that container orchestration versioning can be helpful in some scenarios, but a simple git repository for a static website is far more useful, since it has the right tooling to annotate changes very specifically on demand (sketched at the end of this comment).

    That reverse proxy is ultimately also a static file server. There’s really no value in spinning up more web servers for a strictly static site.

    Folks have gone overboard assuming docker or similar should wrap every little thing. It sometimes adds complexity without making anything simpler. It can simplify some scenarios, but adding a static site to a webserver is not a scenario that enjoys any benefit.
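    A minimal sketch of that git-based flow (directory and file names are made up):

        cd /srv/www/mysite        # the directory the reverse proxy already serves
        git init
        git add index.html style.css
        git commit -m "Reword the intro paragraph on the landing page"
        # Later, answer "what changed, when, and why" without image tags or registries:
        git log --oneline index.html
        git blame index.html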


  • Because serving static files doesn’t really require any flexibility in web serving code.

    If your setup has nginx or something similar as a reverse proxy entry point, you can just tell it to serve the directory. Why bother creating an entirely new chroot and proxy hop when you have absolutely zero requirements beyond what the reverse proxy already provides? If you don’t have that entry point, fine, but at least 99% of the time I see some web server acting as the initial arbiter in front of the services, and it already has all the capability needed to just serve the files.
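    With nginx, for example, that can be a single extra location block in the server you already have; a minimal sketch (server name and paths are made up):

        server {
            listen 80;
            server_name example.home.lan;

            # The existing app, proxied as before
            location / {
                proxy_pass http://127.0.0.1:8080;
            }

            # The static site: served straight off the filesystem,
            # no extra container or proxy hop involved
            location /blog/ {
                alias /srv/www/blog/;
                index index.html;
            }
        }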


  • WSL may be fine for a Windows user to get some access to Linux, but for me it misses the vast majority of what I value in a desktop distribution.

    -Better window managers. This is subjective, but with Windows you are stuck with Microsoft’s implementation; if you might like a tiling window manager or Plasma workspaces better, you need to run something other than Windows or OSX.

    -Better networking. I can do all kinds of things with the network. It’s niche relative to most folks, but the Windows networking stack is awfully inflexible and frustrating after doing a lot of complex networking tasks in Linux.

    -More understanding of and control over the “background” pieces. With Windows, even when you are doing nothing, a lot is happening, and it’s not really clear what is happening where. Linux can be daunting too, but the pieces can be inspected more easily and things are more obvious.

    -Easier “repair”. If Windows can’t fix itself, it’s really hard to recover from a lot of scenarios. Generally speaking, a Linux system has to be pretty far gone before it can’t be brought back by hand.

    -Easier license wrangling. Am I allowed to run another copy of Windows? Can I run it in a VM, or does it have to be bare metal? Is it tied to the system I bought with it preloaded, or is it bound to my Microsoft account? With most Linux distributions this is a lot easier: the answer is “sure, you can run it”.

    -Better package management. With flatpak, dnf, apt, zypper, or snap, I can find pretty much any software I want to run, and by virtue of installing it that way, it also gets updated. Microsoft has added winget, which is a step in the right direction, but the default ‘update’ flow for a lazy user still ignores all winget-managed content, and many applications ignore all of that and push their own self-updater, which is maddening.
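    To make the contrast concrete, it looks roughly like this (exact commands vary by distro):

        # On a typical Fedora-style desktop, a couple of commands update everything
        # installed through the package manager and flatpak:
        sudo dnf upgrade
        flatpak update

        # On Windows, winget can do something similar, but only for what it manages,
        # and only if you remember to run it; the default update flow won't:
        winget upgrade --all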

    The biggest concern, as this thread suggests, is that WSL sets the tone of “ok, you have enough Linux to do what you need from the comfort of the ‘obviously’ better Microsoft ecosystem” and keeps people from considering actually trying it for real.