• 0 Posts
  • 18 Comments
Joined 3 months ago
Cake day: March 20th, 2024


  • Let’s see…

    My servers (tiny/mini/micros) in total are about 600W or so. Two NASs, about 15-20W apiece.

    I spend about $150/mo on electricity, but my hot water/HVAC/etc. are the big power draws. I’d say about $40-50/mo of that goes to powering the servers in my office.
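
    For anyone curious how that shakes out, here’s the back-of-the-envelope math (the ~$0.12/kWh rate is just an assumption, your rate will vary):

    ```
    # ~600 W drawn 24/7 is 0.6 kW * 24 h * 30 days ≈ 432 kWh/month
    awk 'BEGIN { printf "$%.2f/mo\n", 0.6 * 24 * 30 * 0.12 }'   # ≈ $51.84/mo, in line with the $40-50 estimate
    ```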

    Definitely puts off some heat, but that’s partially because it’s all in one rack, and I’ve got a bunch of other work hardware in there. It’s about 2 degrees warmer in my office than in the rest of my home, but I also have air cycling all the time since it’s a single-unit HVAC and I need to keep the air moving to keep the other rooms at the right temp anyway (the AC would come on more often otherwise, even without my rack).




  • So a few comments…

    • I’m not a fan of Ubuntu Server, in part because their snap-packaged Docker can conflict with Docker installed from Docker’s own repo. My preference here is either Debian or Proxmox (Debian + a great virtualization setup). Mint is good, though I like LMDE (the Debian edition) more, in part because I prefer Debian in general.
    • You may want to check out dockge. You do need to have docker running for it, but it’s a simple setup, and it has a clean interface for docker compose. Good for getting used to it imo.
    • grub has no part in docker, so it’s something else hanging.
    • What are the exact errors when you enter “docker-compose up”? (A few commands for pulling those out cleanly are sketched below this list.)
    • What is in your docker-compose file for each of these?
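
    To get at the actual errors, I’d start with something like this (assuming the newer compose plugin; swap in the standalone docker-compose binary if that’s what you have):

    ```
    # Run from the directory containing your docker-compose.yml
    docker compose config      # parse/validate the file and print the merged config
    docker compose up          # run in the foreground so errors print straight to the terminal
    docker compose up -d && docker compose logs -f    # or detach, then follow the logs
    docker compose ps          # see which services are up, restarting, or exited
    ```

    Paste whatever those spit out and it’ll be much easier to tell whether it’s the compose file, the images, or something on the host.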



  • For lots of services that require little CPU and RAM, I use tiny/mini/micro PCs, bought used. I get them for anywhere from $100-$400, and usually all I do is drop in an SSD. That includes Linux VMs when I’m testing distros or deployment on a distro, since 32GB of RAM on the host is more than enough to leave 4-8GB to a VM.

    For some heavier applications, I also have a 4RU case stacked with drives, which I use as a third NAS (a VM with the drives passed through), large DBs, etc. It’s just a 1700X with 64GB of RAM, and that’s plenty.

    For most things (DNS, a few web servers, git, Grafana, Prometheus, rev proxies, Jenkins, a personal F-Droid repo, homepage, etc.) I just use the tiny/mini/micros. Imo, you can’t go wrong with those for your services, plus a big case with spare parts and lots of drives for your NAS. Especially at the price you mentioned. Just remember you can separate your services easily, so don’t focus on getting everything in one box; that can make your requirements (and cost) go up quickly.









  • So just… Waiting for failure then? ;)

    As for me, give me an HA cluster and I don’t care if I need to reinstall. I don’t need to worry about an additional point of failure (USB drive) that is almost always going to fail before any of the other hardware.

    It’s part of why absolutely nothing important ever runs on a Raspberry Pi for me, though; SD cards are no better.

    Now, as for my favorite example of why I don’t do it in production: someone doing a bit of minor maintenance in the rack accidentally pressed against a box running ESXi off a USB drive (a Gen 6 HP, for a rough timeline) and broke the drive.

    The backup? Well, it had corrupted, and wouldn’t boot.