Use object storage for media and backups, then use S3 replication to put a copy somewhere else.
If you have enough users and systems that this is a problem, then you should be centrally managing them. I get that you want to inventory what you have, but I'm saying you're probably doing it wrong right now, and what you're asking for is solved by a central IAM system.
It sounds like you're probably looking for some kind of centralized IAM system, where credentials and access can be managed in one place. Active Directory and LDAP are examples of that; SAML-based single sign-on layers on top of a directory like those.
Well, 1ms of one-way latency is roughly 300km of distance in a vacuum (closer to 200km in fiber), so unless you have something really misconfigured or overloaded, or you're across the country, latency shouldn't be an issue. 10-20ms is normally the high-water mark for most synchronous replication, so you can go a long way before a protocol like DNS becomes an issue.
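The rule of thumb is easy to sanity-check: light in fiber travels at roughly two-thirds of its vacuum speed, so a millisecond of round-trip time bounds how far away the other end can possibly be. A quick back-of-the-envelope calculation:

```python
# Latency-to-distance sanity check.
# Assumes light in fiber travels at ~2/3 of its vacuum speed.
C_VACUUM_KM_PER_MS = 300_000 / 1000              # ~300 km per ms in vacuum
C_FIBER_KM_PER_MS = C_VACUUM_KM_PER_MS * 2 / 3   # ~200 km per ms in fiber

def max_one_way_km(rtt_ms: float) -> float:
    """Upper bound on one-way distance for a given round-trip time over fiber."""
    return (rtt_ms / 2) * C_FIBER_KM_PER_MS

# 1 ms RTT caps you at ~100 km; a 20 ms synchronous-replication budget
# still allows roughly 2000 km of fiber (before switching/queuing delays).
```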
I find a lot of stuff uses docker compose, which works with Podman, but plain Docker is easier, especially if it's nothing web-facing.
Yes, but you don’t need Kubernetes from the start.
Use object storage and enable immutability for the backups. If they compromise your site they shouldn’t be able to delete your backups unless they have the NAS password too.
Script that checks your external IP and updates your DNS provider via API.
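A minimal sketch of that script, assuming a generic REST-style DNS provider; the IP-check service, API endpoint, record ID, and token below are placeholders you'd swap for your provider's real API:

```python
#!/usr/bin/env python3
"""Dynamic-DNS updater sketch. The DNS API URL, record ID, and token are
hypothetical placeholders -- substitute your provider's actual API."""
import json
import pathlib
import urllib.request

CACHE = pathlib.Path("/tmp/last_ip.txt")             # last IP we pushed
IP_CHECK_URL = "https://api.ipify.org"               # returns public IP as text
DNS_API_URL = "https://dns.example.com/records/123"  # placeholder endpoint
API_TOKEN = "changeme"                               # placeholder credential

def get_external_ip() -> str:
    with urllib.request.urlopen(IP_CHECK_URL, timeout=10) as resp:
        return resp.read().decode().strip()

def ip_changed(current_ip: str, cache: pathlib.Path = CACHE) -> bool:
    """Only hit the DNS API when the IP actually changed."""
    last = cache.read_text().strip() if cache.exists() else None
    return current_ip != last

def update_dns(ip: str) -> None:
    req = urllib.request.Request(
        DNS_API_URL,
        data=json.dumps({"type": "A", "content": ip}).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req, timeout=10)

def main() -> None:
    ip = get_external_ip()
    if ip_changed(ip):
        update_dns(ip)
        CACHE.write_text(ip)

# Run main() from cron, e.g.: */5 * * * * /usr/local/bin/ddns_update.py
```

The cache-file check keeps you from hammering the provider's API every five minutes when nothing changed.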
Not big, but I have a few degenerate hoarding friends I mooch off of
Erasure coding may be a better option than RAID.
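The core idea can be shown with the simplest possible case, single parity (same math as RAID 5): XOR the data shards to make a parity shard, and any one lost shard can be rebuilt from the survivors. This is just a toy illustration; real erasure-coded systems use Reed-Solomon codes, which generalize this to survive multiple simultaneous losses.

```python
# Toy single-parity "erasure code": any ONE lost shard is rebuilt by
# XOR-ing the survivors. Reed-Solomon extends the idea to multiple losses.
from functools import reduce

def make_parity(shards: list[bytes]) -> bytes:
    """XOR all equal-length shards together byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)

def rebuild(survivors: list[bytes]) -> bytes:
    # XOR of the remaining shards (data + parity) recovers the missing one.
    return make_parity(survivors)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-size data shards
parity = make_parity(data)

# Lose shard 1, rebuild it from the other data shards plus parity:
recovered = rebuild([data[0], data[2], parity])
assert recovered == data[1]
```

Unlike RAID, the shards don't have to live on disks in one box; spread them across nodes or sites and you get fault tolerance across locations for the same overhead.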
An HP EliteDesk G4 mini desktop is around that much.
Does your work IT department not manage their laptops? Aren't they going to come asking why your system isn't managed?
12 hours of trying to overclock a mid-2000s desktop as much as I can
Object storage is really popular for backups now because you can make it immutable through object lock, which is part of the S3 API, and with erasure coding you can have fault tolerance across locations.
The problem with external LUNs is that they're outside the control of the hypervisor. You can run into situations where migration events cause access issues. If you can have the data presented through the hypervisor instead, you'll hit fewer potential issues. Object storage or NFS are also good options, if available.
Not presenting LUNs to VMs is the best practice. The only time you should do it is when you're absolutely forced to, because you're doing something that doesn't support better shared-disk options.
Fan limit switches are airspeed-based, at least the ones I used. They're used to make sure things like furnace blowers are actually moving air, and to cut off the heat otherwise.
The simple way is to use a fan or blower limit switch. The more complicated way would be to use a mass airflow sensor and something like a Raspberry Pi to control the speed of your hood vent.
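The "more complicated way" boils down to a control loop: read the airflow sensor, map the reading to a fan duty cycle, write it out over PWM. A sketch of the mapping step, where the target airflow, duty range, and the sensor/GPIO plumbing are all hypothetical stand-ins for real hardware libraries (e.g. gpiozero on a Raspberry Pi):

```python
# Sketch of the airflow-to-fan-speed idea. The airflow target and duty
# limits are made-up example numbers; real sensor reads and PWM output
# would come from a GPIO library (e.g. gpiozero on a Raspberry Pi).

def duty_for_airflow(airflow_gps: float,
                     target_gps: float = 20.0,
                     min_duty: float = 20.0,
                     max_duty: float = 100.0) -> float:
    """Simple proportional control: the bigger the shortfall against the
    airflow target, the higher the fan duty cycle, clamped to the fan's
    usable range."""
    if airflow_gps >= target_gps:
        return min_duty                       # enough airflow: idle speed
    shortfall = (target_gps - airflow_gps) / target_gps
    return min(max_duty, min_duty + shortfall * (max_duty - min_duty))
```

In the loop you'd call this every second or so with the latest sensor reading and feed the result to the PWM pin driving the hood vent.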
Save your files to a local S3 object-storage mount, enable object lock (with versioning) for immutability, and use erasure coding for fault tolerance. You can use a FUSE tool like s3fs for the mount, with something like MinIO on the server side. S3 is fine for single-user file access, and you can also replicate to any cloud-based S3 for offsite.
There's not much cost with self-hosted S3 object storage. Servers like MinIO run on plain Linux, and replication is a standard feature of the API.