• 0 Posts
  • 8 Comments
Joined 11 months ago
Cake day: July 28th, 2023



  • Doubletwist@lemmy.world to Selfhosted@lemmy.world · Software vs Hardware RAID
    5 months ago

    Y’all must be doing something wrong, because HW RAID has been hot garbage for at least 20 years. I’ve been using software RAID (mdadm, ZFS) since before 2000 and have never had a problem that could be attributed to the software RAID itself, while I’ve had all kinds of horrible things go wrong with HW RAID. And that holds true not just at home, but professionally, with enterprise-level systems, as a sysadmin.
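
    For anyone who hasn’t set it up, a minimal mdadm sketch (device names are placeholders, adjust for your own disks):

    ```
    # Create a two-disk RAID1 mirror (placeholder device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Watch the initial sync progress
    cat /proc/mdstat

    # Detailed health of the array
    mdadm --detail /dev/md0

    # Persist the config so the array assembles at boot (Debian-style path)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    ```

    And since it’s all in software, the array is readable from any Linux box you plug the disks into, which is exactly what HW RAID can’t give you when the controller dies.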

    With the exception of the (now rare) bare-metal Windows server, or the most basic boot-drive mirroring for VMware (with the important datastores on NAS/SAN, which use software RAID underneath, with at most some limited HW-assisted accelerators), hardly anyone has trusted hardware RAID for decades.





  • Mostly stable at the moment.

    I recently migrated off of a pair of ESXi servers and consolidated down to running my VMs on my TrueNAS Scale server, primarily to save power and heat generation, so that I’d only be running two servers instead of four. It’s not as fancy or flexible, but the VMs run and do what I need.

    So now my lab consists of:

    • 1x Dell R620 w/ 8x 1TB HDD in RAIDZ2 - backups (BackupPC)
    • 1x Dell R620 w/ 8x 2TB SSD in RAIDZ2 + JBOD w/ 16x 3TB HDD in a 2x RAIDZ2 pool - NAS + VMs for Plex, k8s, Ansible/Terraform, etc. (TrueNAS Scale; rough zpool sketch below this list)
    • UniFi UDM Pro + 3 APs
    • UniFi 10Gb aggregation switch
    • UniFi 24-port PoE switch (Standard)
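
    For reference, a rough plain-ZFS sketch of what an 8-wide RAIDZ2 pool like the SSD one looks like (TrueNAS Scale builds this through the UI; device names are placeholders):

    ```
    # 8-disk RAIDZ2 vdev: any 2 disks can fail without data loss.
    # Placeholder device names; in practice use /dev/disk/by-id paths.
    zpool create tank raidz2 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # The 16-disk JBOD pool is the same idea, two RAIDZ2 vdevs striped:
    #   zpool create media raidz2 <disks 1-8> raidz2 <disks 9-16>

    # Verify layout and health
    zpool status tank
    ```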

    I then have another pair of R620s, plus 2 more JBOD trays and disks as cold spares. I may run the servers some during the winter, but it’s too hot in the garage closet in the summer to run them all without additional cooling.