• 15 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • In the US at least, most equipment (unless you get into high-end datacenter stuff) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V legs that are 180 degrees out of phase with each other. The main feed coming into your home is 240V, so your breaker panel splits the circuits evenly between the two legs. Running dual-phase power to a server rack is as simple as running two 120V circuits from the panel.
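
    For reference, the arithmetic behind that (a quick sketch, assuming ideal sine waves; 120V and 240V are RMS values):

    ```latex
    v_A(t) = 120\sqrt{2}\,\sin(\omega t), \qquad
    v_B(t) = 120\sqrt{2}\,\sin(\omega t + \pi) = -v_A(t)
    ```

    Measured leg-to-leg, the two out-of-phase legs add:

    ```latex
    v_{AB}(t) = v_A(t) - v_B(t) = 2\,v_A(t)
    \;\Longrightarrow\;
    V_{AB,\mathrm{rms}} = 2 \times 120\,\mathrm{V} = 240\,\mathrm{V}
    ```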

    My rack only receives a single 120V circuit, but it’s backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.


  • When I use OpenSpeedTest to test to another VM, it doesn’t read or write from the HDD, and it doesn’t leave the Proxmox NIC. It’s all direct from one VM to another. The only limitations are CPU and perhaps RAM. Network cables wouldn’t have any effect on this.

    I’m using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?




  • I have heard the same thing about ACs, but I think it depends on the unit. The window units that I use have a switch on the front that literally just turns them off; there’s no delay time for the compressor. It’s the same as pulling the plug.

    I’ve used single and dual-hose portable ACs in the past, and I only have dual-hose units now. This is purely anecdotal, but when I had single-hose units, they would maintain the temperature throughout the day as it warmed up, but they didn’t do a great job of cooling. A dual-hose AC with a similar capacity was actually able to lower the temperature. That tracks with how they work: a single-hose unit exhausts conditioned indoor air, which puts the room under negative pressure and pulls warm outside air back in through every gap.



  • I did some research on this, and it turns out you’re absolutely correct. I was under the impression that ECC was a requirement for a ZFS cache. It does seem like ECC is highly recommended for ZFS, though, due to the large amount of data it stores in memory. I’m not sure I’d feel comfortable using non-ECC memory for ZFS, but it is possible.

    Anecdotally, I did have one of my memory modules fail in my TrueNAS server. It detected this, corrected itself, and sent me a warning. I don’t know if this would have worked had I been using non-ECC memory.


  • One thing to keep in mind if you go with an i5 or i7 is that you won’t have the option to use ECC memory. If you’re running TrueNAS, you’ll need ECC memory for the ZFS cache. A Xeon E5 v2 server is old, but it still has more than enough power for your use case, and they’re not particularly expensive.

    If you need something more powerful, you can find some decent Xeon Gold systems on eBay, but they’ll be a bit pricier. The new Xeon W chips are also an option, but at least for me, they’re prohibitively expensive.



  • I decided to give up on it. Looking through the docs, they recommend that due to “reasons,” it should be restarted at least daily, preferably hourly. I don’t know if they have a memory leak or some other issue, but that was reason enough for me not to use it.

    I installed TubeArchivist, and it suits my needs much better. Not only do I get an archive of my favorite channels, but when a new video is released, it gets automatically downloaded to my NAS and I can play it locally without worrying about buffering on my painfully slow internet connection.





  • Thanks! I really enjoyed setting it up. The main part of the automation consists of two template sensors: one that monitors temperature, humidity, illuminance, and wind speed and produces an “apparent” temperature, and another that monitors the apparent temperature, does all the math and colorspace conversions, and produces RGB values for the bulb. It was by far the most difficult automation to set up, but it was a great way to get better at programming templates.
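
    For anyone curious, here’s a rough Python sketch of the kind of math involved (not my actual Jinja templates; the apparent-temperature formula is Steadman’s, and the temperature-to-hue mapping is something I made up for illustration):

    ```python
    import colorsys
    import math

    def apparent_temperature(temp_c: float, rh_pct: float, wind_ms: float) -> float:
        """Steadman's apparent temperature: feels warmer when humid, cooler when windy."""
        # Water vapor pressure (hPa) derived from relative humidity and temperature.
        e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * temp_c / (237.7 + temp_c))
        return temp_c + 0.33 * e - 0.70 * wind_ms - 4.00

    def temp_to_rgb(at_c: float, cold: float = -10.0, hot: float = 35.0) -> tuple[int, int, int]:
        """Map apparent temperature onto a blue (cold) to red (hot) hue sweep."""
        frac = min(max((at_c - cold) / (hot - cold), 0.0), 1.0)
        hue = (1.0 - frac) * (240.0 / 360.0)  # 240 deg = blue, 0 deg = red
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        return round(r * 255), round(g * 255), round(b * 255)

    print(temp_to_rgb(apparent_temperature(30.0, 65.0, 2.0)))  # hot, humid day -> reddish
    ```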


  • I have a motion sensor that turns on the bathroom light when you enter, then turns it off after no motion is detected for 5 minutes. Works great except for those long sessions on the toilet; nobody likes to poop in the dark. Now I have a door sensor on the bathroom door: when no motion is detected, the lights turn off after 5 minutes unless the door is closed, in which case it’s 30 minutes. Much better than having to wave your arms around every 5 minutes when you’re trying to take care of business.
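
    The rule itself is tiny; a minimal Python sketch of the logic (not my actual Home Assistant automation):

    ```python
    from datetime import datetime, timedelta

    def lights_should_turn_off(last_motion: datetime, door_closed: bool,
                               now: datetime | None = None) -> bool:
        """Off after 5 idle minutes normally; 30 minutes if the door is closed."""
        now = now or datetime.now()
        idle_limit = timedelta(minutes=30 if door_closed else 5)
        return now - last_motion >= idle_limit
    ```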



  • I use a lot of sunset/sunrise automations, but one of my favorites is for the lights on the main floor of the house. They turn on with a motion/presence sensor when someone enters the room, but the intensity and color of the light are controlled by the sunrise/sunset times. During the day, they turn on at 100% brightness and a cool white; starting 1 hour before sunset and 1 hour before sunrise, they scale between 100%/cool white and 60%/warm white. You don’t notice it unless you’re really paying attention, but it really helps the house feel a lot more “cozy.”
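
    The ramp is just linear interpolation; here’s a rough Python sketch of the sunset side (sunrise is the mirror image, and the Kelvin endpoints are my guesses at typical cool/warm values, not pulled from my config):

    ```python
    from datetime import datetime, timedelta

    DAY = (255, 6500)    # 100% brightness, cool white (Kelvin)
    NIGHT = (153, 2700)  # ~60% brightness, warm white

    def light_settings(now: datetime, sunset: datetime) -> tuple[int, int]:
        """Blend brightness/color temp from DAY to NIGHT over the hour before sunset."""
        frac = (now - (sunset - timedelta(hours=1))) / timedelta(hours=1)
        frac = min(max(frac, 0.0), 1.0)  # clamp: 0 = full day, 1 = full night
        brightness = round(DAY[0] + (NIGHT[0] - DAY[0]) * frac)
        kelvin = round(DAY[1] + (NIGHT[1] - DAY[1]) * frac)
        return brightness, kelvin
    ```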


  • “I end up appreciating my once-in-awhile automations more.”

    I can definitely understand this. One of my automations monitors water sensors at the base of my water heater and under the kitchen sink (which has leaked before). If it triggers, all the lights in the house turn on, any colored bulbs turn red, and they flash on/off each second. It hasn’t triggered yet (thankfully) except for testing, but the peace of mind is fantastic.
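
    As a toy Python sketch of that flashing logic (the dict shape and state names are made up for illustration, not a Home Assistant API):

    ```python
    def alarm_light_state(leak_detected: bool, seconds_elapsed: int) -> dict:
        """Desired bulb state a given number of seconds after a leak trips the sensor."""
        if not leak_detected:
            return {"power": "unchanged"}
        return {
            "power": "on" if seconds_elapsed % 2 == 0 else "off",  # 1-second flash cycle
            "rgb": (255, 0, 0),  # colored bulbs go red
        }
    ```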


  • I’m strongly in favor of keeping things compartmentalized. I have two main servers: One is a Proxmox host with a powerful CPU and a few hard drives set up in a fast but not-so-redundant array (I use ZFS, but my setup is similar to RAID10). Then I have a second server that runs TrueNAS; the CPU is slower, but it has a large amount of storage (120TB physical) arrayed in an extremely fault-tolerant configuration.

    My Proxmox box runs every service on my network, but all that gets stored on its hard drives is the main boot disks. It backs up daily, so I’m not so concerned about drive failure. All my data is stored on the NAS, and it’s shared with the VMs via NFS, SMB, or iSCSI, depending on which is more appropriate.

    For you, I’d recommend building a NAS and keeping all your important data there. Your NUC can host your services, and they can pull data from the NAS. The 256GB on your NUC will be more than enough to host whatever services you need.



  • 4 Mbit is exceptionally slow by today’s standards; when I signed up for internet access (there’s only one provider available where I live), I told them “I will pay for whatever the fastest connection is that you can offer.” Turns out that’s just single-channel DSL. They won’t even install bonded DSL where I live, and believe me, I’ve tried. I do have Starlink as well, but because of the terrain around me, it’s always going to be obstructed; when I calculated how high I would need to raise my antenna to avoid obstructions, it was several hundred feet. My pfSense box does a good job of routing traffic between my DSL connection and my Starlink connection (and falling back when Starlink is obstructed), but for hosting anything, I need a stable connection. That leaves me with just my DSL connection.


  • I’m a big fan of Jellyfin. I run it at home with a dedicated Nvidia A2000 for hardware transcoding. It’s able to transcode multiple 4K streams with tonemapping faster than real time.

    As much as I’d love to use Jellyfin, there are two major issues. First, my internet connection is so slow that I’d be lucky to stream 720p at a low bitrate. I’d spend the money on a faster connection, but I live in an area that doesn’t even get cell phone service. My options are DSL and Starlink, and I have both; the DSL is just slow, and Starlink’s uplink speed isn’t much better, plus I have plenty of obstructions that make it somewhat unreliable. The second problem is that Jellyfin has too steep a learning curve. Telling my relatives “oh, if it starts buffering, just lower the bitrate” isn’t an option. Not to mention, I’d have to run it on a VPS, and hosting a VPS with the resources required for this is way too expensive for me.



  • I’ve swapped out a few of my Zigbee devices in the past, and even though I’ve deleted the original device, HA will add an “_2” to the entity ID, which breaks any automation that uses it, even if the friendly name remains the same. The only time I’ve seen this not happen is when a device drops off the network and I re-pair it. Is there a trick to making this work? Even if I don’t switch to Z2MQTT, this would be really useful to know. I have a few unreliable cheap door sensors that I’d like to replace, but they’re tied to so many automations that I’ve been dragging my feet on it.


  • When you talk about the limits of ZHA, what are you referring to exactly? It would probably take an entire weekend for me to re-pair all the devices on my Zigbee network, but I’m not completely opposed to the idea if I gain some functionality that I didn’t have before.

    Based on what I read when I first set up HA, it seems like ZHA was somewhat lacking for quite some time but is now essentially equivalent to Z2MQTT. I went with ZHA because it seemed like the “default” for Zigbee.


  • corroded@lemmy.world to Selfhosted@lemmy.world · Why docker · 6 months ago

    My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I’d have to install my Docker containers inside this host, then forward any ports or resources between the hypervisor, Docker host, and Docker container.

    I just don’t get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.