• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.

    We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.

    Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz and 32 PCIe 4.0 lanes, and is rated at 117W.

    The 4584PX, ostensibly the top of this range, also has 16 cores but at double the clock speed, along with 28 PCIe 5.0 lanes and a 120W rating; it seems like it would be a perfectly fine drop-in replacement for that.

    (I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)

    As more PCIe 5.0 expansion options land, I’d expect the need for large numbers of PCIe lanes to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s (quick sketch below).

    Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.
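
    For anyone who wants to sanity-check those throughput numbers, here’s a rough back-of-the-envelope sketch in Python. The per-lane figures (~1.97GB/s for PCIe 4.0, ~3.94GB/s for PCIe 5.0) are approximate usable bandwidth after encoding overhead, not vendor-exact values:

    ```python
    # Rough PCIe bandwidth comparison; per-lane figures are approximate.
    GB_PER_LANE = {"4.0": 1.97, "5.0": 3.94}  # GB/s per lane, post-encoding

    def total_bandwidth(gen: str, lanes: int) -> float:
        """Approximate aggregate PCIe bandwidth in GB/s."""
        return GB_PER_LANE[gen] * lanes

    print(f"Xeon D-2776NT, 32x PCIe 4.0: ~{total_bandwidth('4.0', 32):.0f} GB/s")  # ~63 GB/s
    print(f"4584PX,        28x PCIe 5.0: ~{total_bandwidth('5.0', 28):.0f} GB/s")  # ~110 GB/s
    print(f"100Gb NIC needs ~{100 / 8:.1f} GB/s; a PCIe 5.0 x4 slot offers ~{total_bandwidth('5.0', 4):.0f} GB/s")
    ```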


  • To expand on @doeknius_gloek’s comment, those categories usually correspond directly to a range of DWPD (drive writes per day, i.e. endurance) figures. I’m most familiar with buying servers from Dell, but other brands are pretty similar.

    Usually, the split is something like this:

    • Read-intensive (RI): 0.8 - 1.2 DWPD (commonly used for file servers and the likes, where data is relatively static)
    • Mixed-use (MU): 3 - 5 DWPD (normal for databases or cache servers, where data is changing relatively frequently)
    • Write-intensive (WI): ≥10 DWPD (for massive databases, heavily-used write cache devices like ZFS ZIL/SLOG devices, that sort of thing)

    (For comparison, consumer SSDs frequently have endurance ratings of only 0.1 - 0.3 DWPD, and I’ve seen as low as 0.05)

    You’ll also find these tiers roughly line up with the SSDs that expose different capacities while having the same amount of flash inside (the difference going to overprovisioning); where a consumer drive would be 512GB, an enterprise RI drive would be 480GB, and a MU/WI drive only 400GB. Similarly 1TB/960GB/800GB, 2TB/1.92TB/1.6TB, etc.

    If you only get a TBW figure, just divide it by the capacity and the warranty length in days to get DWPD. For instance, a 1.92TB drive rated at 1 DWPD over a 5-year warranty might list ~3.5PBW.
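
    If it helps, the conversion both ways is trivial; a minimal sketch in Python (assuming a 365-day year and that TBW is quoted against the stated capacity, which is how vendors generally do it):

    ```python
    # Convert between TBW (total terabytes written) and DWPD endurance ratings.
    def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
        """Drive Writes Per Day implied by a TBW rating."""
        return tbw / (capacity_tb * warranty_years * 365)

    def tbw_from_dwpd(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
        """TBW implied by a DWPD rating."""
        return dwpd * capacity_tb * warranty_years * 365

    print(dwpd_from_tbw(3500, 1.92, 5))  # ~1.0 DWPD for the 1.92TB / 5y / 3.5PBW example
    print(tbw_from_dwpd(1, 1.92, 5))     # 3504 TBW, i.e. ~3.5 PBW
    ```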


  • Not OP, but genuine answer: because I loathe being forced into their way of doing things. Every little thing on the Mac seems engineered with an “our way or the highway” mentality that leaves no room for other (frequently better) ways of achieving anything.

    Adding to that, window/task management is an absolute nightmare (things have worked certain ways basically since System 6 on monochrome Mac Classic machines, and haven’t improved), and despite all claims to the contrary, its BSD-based underpinnings are just different enough from Linux’s GNU toolset to make the supposed compatibility (or the purported “develop on Mac, deploy on Linux” workflow) a gross misadventure.

    I just find the experience frustrating and unpleasant, and I always walk away from a Mac feeling irritated.

    (For context: >20 years as an exclusively Linux user. While it’s definitely not always been a smooth ride, I seldom feel like I’m fighting against the computer to get it to do what I want, which is distinctly not my experience with Apple products.)


  • Agree with all these points about the Nexdock.

    We bought a bunch of them at work to serve as KVM consoles for computers without networked out-of-band management, and at that they excel.

    That said, I don’t think I actually knew it had speakers; that wasn’t really part of my use case :)

    It also makes me wish that USB-C connectors on GPUs hadn’t been such a short-term deal; the one-cable hookup is definitely a great thing.