If you have an Intel CPU with quicksync, it will likely perform better than the 1060 in terms of visual quality, if it's Coffee Lake (8th gen) or newer.
If not, well, the 1060 will be fine up to whatever nVidia's concurrent stream limit is (4?).
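If you want to eyeball the quality difference yourself before committing, a quick test transcode through ffmpeg is an easy way to compare (this assumes an ffmpeg build with QSV support; filenames and bitrate are made up):

```
# decode/encode through quicksync and see how it looks
ffmpeg -hwaccel qsv -i input.mkv -c:v h264_qsv -b:v 6M qsv-test.mp4
```

Run the same clip through NVENC on the 1060 (h264_nvenc) and compare the two outputs.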
Fair, but he said he wants to move from Windows to Linux, so I just assumed there weren't going to be any of those since, well, they're not going to run on Linux anyways.
Not in a way you’re probably going to like.
You could set up a bare-metal hypervisor on the system and set up a VM each for your NAS, Windows, and Linux, swapping between them as needed, but uh, that's not really a pleasant desktop use case, for a number of reasons, one of which is that you really won't have the normal 'sit down and use the computer' desktop experience.
Alternate option: run the NAS and either the Linux or the Windows install in VMs, and keep the machine booted into, say, the desktop Linux environment, with everything else being a virtualized setup.
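If you went that route, standing up, say, the NAS guest under KVM/libvirt is roughly this (the name, sizes, ISO, and disk path are all made up for illustration):

```
# create the NAS VM and hand it a whole disk, raw
virt-install --name nas --memory 8192 --vcpus 4 \
  --disk path=/dev/disk/by-id/ata-EXAMPLE_DRIVE,format=raw \
  --cdrom /isos/nas-installer.iso \
  --os-variant generic
```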
Since Android apps are required, I'd maybe go about this another way: find the app you like the most, then stand up whatever backend it uses for sync.
I was already in the FreshRSS ecosystem, but man, I don't really like any of the Android apps on offer, and swapping at this point would be annoying (bookmarks, saved stories, etc.).
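For reference, the FreshRSS backend itself is about a one-liner to stand up (port and volume name here are arbitrary), and most of the Android clients can sync against its Google Reader-compatible API:

```
docker run -d --name freshrss \
  -p 8080:80 \
  -v freshrss_data:/var/www/FreshRSS/data \
  freshrss/freshrss
```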
good idea to run restic as root
As a general rule, run absolutely nothing as root unless there's absolutely no other way to do what you're trying to do. And, frankly, there are maybe a dozen things that must run as root, at most.
One of the biggest hardening things you can do for yourself is to always, always run everything at the lowest privilege level that accomplishes what you need.
If all your data is owned by a user, run the backup tool as that user.
If it's owned by several non-privileged users, then you want to make sure that the group permissions let you access it.
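Using restic from the quote as the example, that just means something like this (user and paths are made up; password handling omitted):

```
# run the backup as the user who owns the data, not as root
sudo -u media restic -r /mnt/backup/restic-repo backup /srv/media
```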
As a related note, this also applies to containers and software you're running: you shouldn't run docker containers as root unless they specifically MUST have a permission that only root has, and I personally don't run internet-facing ones as the same user as all the others: if something gets popped, they not only don't have root permissions, but they're also siloed into their own data in the event of a container escape.
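Concretely, that's a couple of flags per container (the image name, uid, and paths here are made up):

```
# dedicated low-privilege uid/gid for an internet-facing service
docker run -d --name webapp \
  --user 2001:2001 \
  -v /srv/webapp:/data \
  example/webapp
```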
My expectation is that, at some point, I'll miss a CVE and get pwnt, so the goal is to reduce how much damage someone can do when that happens rather than assume I can keep it from happening at all. Everything is focused on 'once this is compromised, how can I make the compromise useless to the attacker?'
Unifi Gateway Ultra
How have you liked the gateway? Any stupid decisions that have annoyed you?
My USG has decided that, after a decade, it's going to be flaky and crash whenever it wants to (even after replacing its 4th dead PSU and 2nd USB stick), and I'm thinking it's probably time to upgrade.
I'll admit to both liking the Unifi ecosystem and firmly not trusting the Unifi ecosystem one damn bit, which is a bit of a weird situation where I've been really, really unwilling to upgrade anything because that hasn't always gone, uh, smoothly.
My comment was more FDM vs resin support removal, and that it’s not like resin is all sunshine and rainbows.
If anything, modern tree supports for FDM have fixed the giant-blob-of-plastic problem with supports you’d previously get on smaller models, where you’d end up with, uh, well, a giant blob of plastic stuck to an arm or a sword or whatever.
Still not fantastic, but until someone figures out antigravity, it’s what it is.
print with supports, but removing supports from such thin, fragile bits of a model is nigh impossible without doing damage
Removing resin supports is worse, if anything.
They leave little bumps where they’re cut off that you have to then try to VERY VERY gently sand off without bending or breaking said fiddly models.
You could also use nginx if you wanted; it'll proxy arbitrary TCP with the stream module.
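Something like this in nginx.conf does it (addresses and ports are made up; stream {} sits at the top level next to http {}, and the stream module needs to be compiled in):

```
stream {
    server {
        listen 5432;                    # local port to expose
        proxy_pass 192.168.1.50:5432;   # backend receiving the raw TCP
    }
}
```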
I mean, recovery from parity data is how all of this works; snapraid just doesn't require you to have a controller, use a specific filesystem, have matching drive sizes, or anything else. Recovery is mostly like any other RAID option I've ever used.
The only drawback is that the parity data is mostly equivalent in size to the actual data you're making parity data for, and you need to keep a couple copies of the index, since if you lose the index or the parity data, no recovery for you.
In my case, I didn’t care: I’m using the oldest drives I’ve got as the parity drives, and the newer, larger drives for the data.
If I were doing the build now and not 5 years ago, I might pick a different solution, but there's something to be said for an option that's dead simple (looking at you, zfs) and likely to be reliable because it's not doing anything fancy (looking at you, btrfs).
From a usage (not technical) standpoint, the most equivalent commercial/prefabbed solution would probably be something like unraid.
A tool I’ve actually found way more useful than actual raid is snapraid.
It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit is a big deal, because I can scrub all the data in the array and it'll happily tell me if something funky has happened.
It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.
There's a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and Linux ISO collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it's a fairly viable option.
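For a sense of the setup cost, the whole thing is one config file plus a couple of commands (paths and disk names here are made up):

```
# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content    # extra copy of the index, per the caveat above
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

Then `snapraid sync` updates the parity and `snapraid scrub` does the validation pass.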
I just uh, wrote a bash script that does it.
It dumps databases as needed, and then makes a single tarball of each service (or a couple, depending on what's needed to ensure a full backup of the data).
Once all the services are backed up, I just push all the data to an S3 bucket, but you could use rclone or whatever instead.
It's not some fancy cool toy like the dozens of other backup options the kids these days love, but I'm a fan of simple, and, well, a couple of tarballs in an S3 bucket is about as simple as it gets: restoring doesn't require any tools or configuration or anything, just snag the tarballs you need, unarchive them, done.
I also use a couple of tools for monitoring the progress, and a separate script that can do a full restore to make sure shit works, but that's mostly just doing what you did to make and upload the tarballs, backwards.
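Stripped down, the skeleton looks something like this (service names, paths, and the bucket are made up, and the real thing has more error handling):

```
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
WORK=/tmp/backups/$STAMP
mkdir -p "$WORK"

# dump databases first so the tarballs hold a consistent snapshot
docker exec postgres pg_dumpall -U postgres > "$WORK/postgres.sql"

# one tarball per service
for svc in nextcloud freshrss jellyfin; do
    tar -czf "$WORK/$svc.tar.gz" -C /srv "$svc"
done

# ship it all off-site; rclone or whatever works just as well
aws s3 cp --recursive "$WORK" "s3://example-backup-bucket/$STAMP/"
```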
I'm finding 8 years to be pretty realistic for when drive failures start, and I did the math when I was buying drives and came to the same conclusion about buying used.
For example, I'm using 16TB drives, and for the Exos ones I'm using, a new drive is like $300 and the used pricing seems to be $180.
If you assume the used drive is 3 years old, and that the expected lifespan is 8 years, then the used drive is very slightly cheaper than the new one.
But the 'very slight' is literally about a dollar and a half per drive per year ($36/drive/year used vs. $37.50/drive/year new), which doesn't really feel like it's worth dealing with essentially unwarrantied, unknown, used, and possibly abused drives.
You could of course get very lucky and squeeze more than 8 years out of the used drive, or the new one could fail early, but statistically those outcomes are about equally likely for either drive, so I didn't bother factoring them in.
And, frankly, at 8 years it’s time to yank the drives and replace them anyways because you’re so far down the bathtub curve it’s more like a slip n’ slide of death at that point.
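If you want to check the arithmetic (purchase price divided by expected remaining years):

```
$ echo 'scale=2; 180/5; 300/8' | bc    # used: ~5 years left; new: 8 years
36.00
37.50
```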
I’m going to get downvoted to hell for this but uh, I usually tell clients Squarespace is what they want these days.
Self-hosting something like Wordpress or Ghost or Drupal or Joomla or whatever CMS you care to name costs time: you have to patch it, back it up, and do a lot of babysitting to keep it secure and running. It's very much not ship-and-forget; really, nothing self-hosted is.
I'm very firmly of the opinion that small-business people should be focused on their business, not their email or website or whatever, because any time you spend fighting your tech stack is time you could have been actually making money. It's all a cost; it just depends on whether you value $20 a month or your time more.
If I had someone come to me asking to set up this stuff for their business, I'd absolutely tell them to use gSuite for email, file sharing, documents, and such, and Squarespace for the website, and then not worry about shit, because they're both reliable and do what they say on the tin.
They state the code will be released after the first orders ship, which makes a certain kind of sense given this is suddenly a competitive space.
Though, I 10000% agree that there’s no reason to take a leap of faith when you can just wait like, uh, a month, and see what they do after release. It’s not like they won’t still be selling these or something.
Right, but you're pulling way more power than the home server I'm running. At 10-15W, it's doing Frigate with OpenVINO-based identification (on the iGPU) for 4 cameras, usually 2 Jellyfin streams at any given time, 4 VMs, Home Assistant, and ~80 other containers, plus a couple of on-host services for NAS duties (smb, nfs, ftp, afp, nginx, etc.).
I was just surprised that a Ryzen U-series chip would be worse in terms of power usage.
You know, I think I did the thing I always do and forgot how bad the idle power draw for Ryzen CPUs is due to how they're architected.
Like, my home server is a 10850K, which is a CPU known for using 200+W… except that, of course, at idle/normal background loads it's sitting at more like 8-15W. I did some tweaking to tell it to both respect its TDP and also adjust turbo boost to, uh, don't, but still: it's shockingly efficient after fiddling.
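If you want to poke at the same knobs on Linux, they're exposed through RAPL and intel_pstate in sysfs; the values here are examples, not a recommendation:

```
# cap the sustained package power limit (microwatts, so 125W here)
echo 125000000 | sudo tee /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw
# tell turbo boost to, uh, don't
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```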
I wouldn't have expected a 5500U to sit at 30W under normal loads, but I suppose that depends on the load?
Ryzen 7, so definitely not a low power
It's a laptop chip, and a 15W one at that, so I wouldn't exactly call it high-power.
I don't really think it's necessarily a deal breaker, but it's caused a lot of people a lot of nagging little issues, so it might be worth making sure you're not going to run into anything.
I'm super stoked at the appearance of the NAS appliance form factor with hardware whose performance isn't rotten-potato level.
Next rebuild I do is certainly going to be one of these things, though that's probably a billion years away since my current NAS is hilariously overpowered.
Yeah, quicksync won't help you there.
I thought nVidia's limit was enforced by their drivers, but that's probably changed; it's been a while since I looked at NVENC as a solution (quicksync, then an Arc card over here).