

Now now, they’re not just inconsiderate assholes and leeches.
They’re inconsiderate nazi oligarch assholes and leeches.
There’s no such thing as too much seeding.
Well, maybe the 85tb of Ubuntu 24.04 I’ve done is too much, but I mean, whatever.
(I’ve got basically everything I’ve downloaded in the last 7 years seeding, some 6000 torrents. qBittorrent isn’t the happiest with this, but it’s still working, albeit using a shit-ton of RAM at this point.)
10940X
“They say”, but they’re right. Ryzen chips do have worse idle power usage, but you’re talking about 10w or so, at most.
And uh, if you were looking at an X-series CPU, I can’t see how that 10w is a dealbreaker, because you were already looking at a shockingly inefficient chip.
Debian stable is great: it’s, well, stable. It’s well supported, has an extremely long support window, and the distro has a pretty stellar track record of not doing anything stupid.
It’s very much in the install-once-and-forget-it category, just gotta do updates.
I run everything in containers for management (but I’m also running something like 90 containers, so a little more complex than your setup) and am firmly of the opinion that, unless you have a compelling reason to NOT run something in a container, just use the containerized version.
Snapraid parity is offline, so that’s not strictly accurate.
You build the parity, and then until you do a sync/scrub/rebuild they don’t do shit, so there’s no reason to keep them spun up.
If the drives are slow spinning up, then this is probably not a fatal concern, but there are zero details here.
That could probably work.
Were it me, I’d build a script that re-hashes all the data and compares it to the previous hashes as the first step of adding more files. If everything comes out consistent, I’d copy the new files over, hash everything again, save the hash results elsewhere, and then repeat as needed.
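Roughly what I have in mind, as a sketch; the paths are placeholders and sha256 is just my pick, use whatever hash you like:

```python
#!/usr/bin/env python3
# Sketch: verify the archive against the last manifest, then (and only then)
# copy new files in, re-hash everything, and save a fresh manifest elsewhere.
import hashlib
import json
import shutil
from datetime import datetime
from pathlib import Path

ARCHIVE = Path("/mnt/archive")                  # the cold-storage copy (placeholder path)
INCOMING = Path("/mnt/incoming")                # new files waiting to be added (placeholder)
MANIFESTS = Path("/mnt/other-disk/manifests")   # manifests live OFF the archive drive

def hash_file(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def hash_tree(root):
    return {str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

# 1. Re-hash everything and compare against the previous manifest.
previous_files = sorted(MANIFESTS.glob("manifest-*.json"))
previous = json.loads(previous_files[-1].read_text()) if previous_files else {}
current = hash_tree(ARCHIVE)
bad = [name for name, digest in previous.items() if current.get(name) != digest]
if bad:
    raise SystemExit(f"{len(bad)} files changed or went missing, not adding anything: {bad[:5]}")

# 2. Data is consistent, so copy the new files over.
for src in INCOMING.rglob("*"):
    if src.is_file():
        dest = ARCHIVE / src.relative_to(INCOMING)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

# 3. Hash everything again and save the results somewhere that isn't the archive drive.
MANIFESTS.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
(MANIFESTS / f"manifest-{stamp}.json").write_text(json.dumps(hash_tree(ARCHIVE), indent=2))
```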
Yeah I figured that’s what you were trying to figure out, since I 100% went through the same thought process, lol.
I just bought a Mac Mini instead of moving to Linux on the desktop, and am pretty happy with the outcome (everything works) but that’s not a solution for everyone.
I can answer your question: Resolve is very clear that Intel iGPUs are not supported in Linux, at all, because the Intel Linux drivers do not support some features they require.
Free version, paid version: doesn’t matter, it’s not supported hardware right now. Not even the new ARC cards are, because it’s a software issue Intel has to fix.
Ran into this when looking at moving to Linux and there’s not a solution for it.
By ‘format’ I mean the physical media itself: the tape that goes in the drive, or the disk, or whatever.
Tape existed 50 years ago: nothing modern and in production can read those tapes.
The problem is, given a big enough time window, the literal drives to read it will simply no longer exist, and you won’t be able to access even non-rotted media because of that.
As for data integrity, there are a lot of options: you can make an md5 sum of each file, then do it again later and see if anything is different.
The only caveat here is that the checksums themselves have to be stored somewhere that’s not JUST on the drive, because if the drive DOES corrupt itself, and your only record of the “good” hashes is on that same drive, well, you can’t necessarily trust those hashes either.
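As a rough sketch of that (the paths here are made up; the one thing that matters is the manifest file living somewhere other than the drive it describes):

```python
import hashlib
import json
from pathlib import Path

DRIVE = Path("/mnt/coldstorage")                       # the drive being checked (placeholder)
MANIFEST = Path("/mnt/somewhere-else/manifest.json")   # stored NOT on that drive

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

hashes = {str(p.relative_to(DRIVE)): md5sum(p) for p in DRIVE.rglob("*") if p.is_file()}

if MANIFEST.exists():
    old = json.loads(MANIFEST.read_text())
    changed = [name for name in old if hashes.get(name) != old[name]]
    print(f"{len(changed)} files changed or went missing" if changed else "everything still matches")
else:
    MANIFEST.write_text(json.dumps(hashes, indent=2))   # first run: record the known-good hashes
```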
So, 50 years isn’t a reasonable goal unless you have a pretty big budget for this. Essentially no media is likely to survive that long and be readable unless it’s stored in a vault, under perfect climate-controlled conditions. And even if the media is fine, finding an ancient drive to read a format that no longer exists is not a guaranteed proposition.
You frankly should be expecting to have to replace everything every couple of years, and maybe more often if your routine tests of the media show it’s started rotting.
Long term archival storage really isn’t just a matter of dumping it to some media, locking it up, and never looking at it again.
Alternatively, you could just make someone else pay for all of this, and shove it all into something like Glacier and make the media Amazon’s problem. (Assuming Amazon is around that long and that nothing catches fire.)
I’m using blu-ray disks for the 3rd copy, but I’m not backing up nearly as much data as you are.
The only problem with optical media is that you should only expect it to be readable for a couple of years, best case, at this point, and probably not even that as the tier 1 guys all stop making it and you’re left with the dregs.
You almost certainly want some sort of tape option, assuming you want long retention periods and are only likely to add incremental changes to a large dataset.
Edit: I know there’s longer-life archival optical media, but for what that costs, uh, you want tape if at all possible.
Change your port definitions so that they’re only binding to localhost, like so:
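(Assuming docker-compose; the service name and ports below are placeholders, swap in your own.)

```yaml
services:
  myservice:                      # placeholder service name
    ports:
      - "127.0.0.1:8080:8080"     # bound to localhost only
      # instead of the usual:
      # - "8080:8080"             # reachable by anything that can reach the box
```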
That’ll stop access from anywhere but the local host. You’ll have to redo your reverse proxy configuration to use 127.0.0.1 instead of whatever you’re using now, though.
Also, isn’t the preferred nomenclature actually a ‘m’lord’?
Perhaps I’ve misunderstood something.
When I was a wee kid, I thought that scene from The Matrix where Morpheus explains that humans destroyed the whole damn planet just to maybe slow down the machines was stupid.
I mean if you block the sun, we’re all going to fucking die, why would you do something that stupid?
Yeah, well, the last few years have shown that actually at least half the people on the planet would be pro-kill-everything, even if that includes themselves.
So really, this take isn’t remotely shocking anymore.
Oh, I wasn’t saying not to, I was just saying make sure you’re aware of what recovery entails, since a lot of raid controllers don’t just write bytes to the disk and can, if you don’t have spares, make recovery a pain in the ass.
I’m using MD raid for my boot SSDs and yeah, the install was a complete pain in the ass since the Debian installer will let you, but it’s very much in the Linux sense of ‘let you’: you can do it, but you’re figuring it out on your own.
Buy multiple drives, set up some sort of raid, set up some sort of backup. Then set up a 2nd backup.
Done.
All drives from all manufacturers are going to fail at more or less the same rate (see: Backblaze’s stats), and trying to buy a specific thing to avoid the death which is coming for all drives is, mostly, futile: at the absolute best you might see a single specific model to avoid, but that doesn’t mean entire product lines are bad.
I’m using some WD Red drives which are pushing 8 years old, and some Seagate Exos drives which are pushing 4, and so far no issues on any of the 7 drives.
Make sure, if you use hardware RAID, you know what happens if your controller dies.
Is the data in a format you can access easily? Do you need a specific raid controller to be able to read it in the future? How are you going to get a new controller if you need it?
That’s a big reason why people nudge you to software raid: if you’re using md and doing a mirror, then that’ll work on any damn drive controller on earth that linux can talk to, and you don’t need to worry about how you’re getting your data back if a controller dies on you.
Very very little. It’s a billion tiny little bits of text, and if you have image caching enabled, then all those thumbnails.
My personal instance doesn’t cache images since I’m the only one using it (which means a cached image does nobody any good), and I use somewhere less than 20gb a month, though I don’t have entirely specific numbers, just before-Lemmy and after-Lemmy aggregates.
Huh.
Usually when I run into that I just bounce the Portainer container and it sorts shit out.
Maybe that’s actually causing the tokens to rotate/expire and thus doing the same shit?
Universality, basically: almost everyone, everywhere has an email account, or can find one for free. And every OS and every device has a giant pile of mail clients for you to choose from.
And I mean, email is a simple tech stack and well understood and reliable: I host an internal mail server for notifications and updates and shit, and it’s fast and works perfectly.
It’s only when you suddenly need to email someone OTHER than your local shit that it turns to complete shit.