What makes session less secure? This is the first I’ve heard of it.
It’s normal for it to heat up under load, but it’s not normal for it to be under load 24/7 indefinitely.
Sounds like somebody’s mommy needs to take their phone away.
Why is that? Does the motherboard effectively just not have enough inputs for all the disks, so that’s why you need dedicated hardware that handles some kind of raid configuration, and in the end the motherboard just sees it all as one drive? I never really understood what SCSI was for. How do the drives connect, SATA/PATA/something else?
Modern Android Do Not Disturb is configurable enough for you to do this. Allow your family contacts through, block the rest.
Just a note to people posting these cool graphs that it does give away your rough location. As long as you’re cool with that (pun intended).
You’re saying you see a bunch of login attempts on your router, but you don’t think they actually got into it?
Assuming this is in the US, aren’t washing machines normally on a 20A circuit? Or even 240V?
I would assume that wouldn’t cause so much contention that the system is unusable, though, right? Unless they’re busy waiting.
Sounds like OP is asking about file storage. Video streaming could be spotted using info leaked regarding traffic behavior. But uploading an encrypted file for storage shouldn’t leak anything except the size.
> [yuzu agrees to] hand over any “physical circumvention devices” and “modified Nintendo hardware” to Nintendo.
You will own nothing and like it.
What should happen is a judge should recognize that Nintendo’s behavior is categorically worse for consumers: not allowing them to modify their own hardware, not allowing them to make copies of their own data, not allowing users to build their own hardware and software on which to execute their own data in a way that creates a demonstrably better experience for the consumer. You know, innovation.
What will happen is that money is the judge and jury, and if corporations don’t get every last million they want out of consumers, they’ll sue it out of them as “damages”.
I would say vote with your wallet, but gamers have never once exhibited a backbone. The next Pokémon game could just boot up to a low-res JPEG of Pikachu that crashes after 20 minutes, and it would still make millions. And Nintendo knows it.
Wait what? I’m saying what you said is correct. Am I the one who’s confused here?
Edit: oh maybe you meant that’s the excuse people give for being wrong? lol
More specifically, the container is run on bare metal if the host is running on bare metal. You are correct in this thread, not sure why you’re being downvoted. I guess people don’t know what virtualization technology is or when it is used.
If the nextcloud container is slow, it’s for reasons other than virtualization.
> It’s all about where the packages and services are installed
No. Your packages and services could be on a network share on the other side of the world, but where they are *run* is what matters here. Processes are always loaded into, and run from, main memory.
“Running on bare metal” refers to whether the CPU the process runs on is emulated/virtualized (e.g. via Intel VT-x) or not.
A VM uses virtualization to run an OS, and the processes are running within that OS, thus neither is running on bare metal. But the purpose of containers is to run them wherever your host OS is running. So if your host is on bare metal, then the container is too. You are not emulating or virtualizing any hardware.
Here’s an article explaining the difference in more detail if needed.
As the other person said, I don’t think the SSD knows about partitions or makes any assumptions based on partitioning. It just knows whether you’ve written data to a certain location, and it can be smart enough to track how often you write there. So if you keep writing to a single location, it can remap that logical address to different physical memory so that you don’t wear it out.
I say “could” because it really depends on the vendor. This is where one brand could be smart and spend the time writing smart software to extend the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.
It’s also worth noting that drives have an unreported pool of “spare sectors” that they can use if they detect one has gone bad. I don’t know if you can see the total remaining spare sectors, but the pool typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.
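The reallocation described above can be sketched as a toy model. This is purely illustrative (the class and attribute names are made up, and a real flash translation layer is far more complex), but it shows the key idea: the host keeps reading the same logical address while the firmware silently swaps the physical block underneath.

```python
# Toy model of SSD firmware reallocating a bad block to a spare.
# Illustrative only -- not how any real FTL is implemented.

class ToySSD:
    def __init__(self, num_blocks, num_spares):
        # logical address -> physical block index (identity at first)
        self.mapping = {i: i for i in range(num_blocks)}
        # spare physical blocks, invisible to the host
        self.spares = list(range(num_blocks, num_blocks + num_spares))
        self.data = {}
        self.reallocated_count = 0  # roughly what S.M.A.R.T. reports

    def write(self, logical, value):
        self.data[self.mapping[logical]] = value

    def read(self, logical):
        return self.data.get(self.mapping[logical])

    def mark_bad(self, logical):
        # Firmware detects errors and maps in a spare; the host still
        # sees good storage at the same logical address.
        old = self.mapping[logical]
        spare = self.spares.pop(0)
        self.data[spare] = self.data.get(old)  # salvage if still readable
        self.mapping[logical] = spare
        self.reallocated_count += 1

ssd = ToySSD(num_blocks=8, num_spares=2)
ssd.write(3, b"hello")
ssd.mark_bad(3)               # physical block 3 failed
print(ssd.read(3))            # -> b'hello', same logical address still works
print(ssd.reallocated_count)  # -> 1, like a reallocated-sector counter
```

Once the spares run out, the drive has no choice but to surface real errors, which is why a climbing reallocated-sector count is a common "replace this drive soon" signal.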
Seriously? Why be like this? It feels like a Lemmy thing for people to have a chip on their shoulder all the time.
You shared your understanding, and then I shared mine (in fewer words). I also summarized in one sentence at the bottom. Was just trying to have a conversation, sorry.
Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.
Concepts like Files and FATs and Copy-on-Write are format-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware will intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects a block is producing errors (bad parity bits), it will mark it as bad and map in a new block. To the filesystem, there’s still perfectly good storage at that address, albeit with a potential one-off read error.
Larger SSDs just give the firmware more spare blocks to pull from.
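A minimal sketch of that idea (hypothetical firmware logic, not any vendor's actual algorithm): each write to a logical block is steered to the least-worn available physical block, so hammering one logical address still spreads erases evenly across the flash.

```python
# Toy dynamic wear leveling: writes to a logical block are redirected
# to the least-erased free physical block. Illustrative only.

class WearLeveler:
    def __init__(self, num_physical):
        self.erase_counts = [0] * num_physical
        self.mapping = {}        # logical block -> physical block
        self.free = set(range(num_physical))

    def write(self, logical):
        # Release the previously used physical block back to the pool.
        if logical in self.mapping:
            self.free.add(self.mapping[logical])
        # Pick the least-worn free block for the new data.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.erase_counts[target] += 1
        self.mapping[logical] = target

wl = WearLeveler(num_physical=4)
for _ in range(100):
    wl.write(0)              # hammer the same logical block 100 times
print(wl.erase_counts)       # -> [25, 25, 25, 25], not [100, 0, 0, 0]
```

Without the remapping, all 100 erases would land on one block; with it, each physical block absorbs an equal share, which is exactly why a filesystem can't easily "burn a hole" in one spot of a modern SSD.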
Assume your hard drives will fail. Any time I get a new NAS drive, I do a burn-in test (a simple `badblocks` run; it can take a few days depending on the size of the drive, but you can test multiple drives in parallel) to get them past the first ledge of the bathtub curve. Then I put them in a RAIDZ2 pool and assume each one will fail one day.
Therefore, it’s not about buying the best drives so they never fail, because they will fail. It’s about buying the most cost effective drive for your purpose (price vs avg lifespan vs size). For this part, definitely refer to the Backblaze report someone else linked.
SimpleX is the first platform I’ve heard of that doesn’t use IDs (which doesn’t make much sense to me, practically, but sure). So would you say everything is less secure than SimpleX?