![](https://lemmy.sdf.org/pictrs/image/de62cf72-6a10-49cb-9ae4-1ef305ab7070.png)
![](https://lemmy.dbzer0.com/pictrs/image/a18b0c69-23c9-4b2a-b8e0-3aca0172390d.png)
Can confirm AirVPN works
Can confirm AirVPN works
It’s cool, you’re sleepy. We get it
Yeah, I mean I use them with Bazarr, so I don’t deal with their UI; it’s all automated. I’m not saying they’re convenient, but they are public and widely used, so if your goal is for your subtitles to help others, they’re a good option.
I mean, opensubtitles still works fine
So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.
But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.
I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.
Besides, there are better reasons/ways to fight the system than helping other people avoid learning.
It’s also really scary to think that most of us Gen Z-ers aren’t properly able to object to privacy-eroding tech bills, because we’re too tech-illiterate to understand their impacts.
Millennial here, putting my tinfoil hat on for a minute:
This is exactly what the big tech corpos wanted all along. They’ve been curving the arc of history towards people at large being digitally dependent but incapable of self-service. They want addicts, not citizens. Serfs, not an educated populace.
In the 70s, 80s, 90s, and into the early 00s there was this “hacker culture” which was centered on the idea that as long as we keep our wits about us we could use computers as a great equalizer. The common person was empowered. Any and all software would be distributed for free so anyone who couldn’t afford it could get it. Bill Gates was painted as a villain because he was overtly capitalistic. The corpos were kept in check by a diverse, rapidly evolving market and a ton of savvy users who knew what they wanted.
Giant corporations pretty much caught on that they needed there to be fewer tech savvy people who could get one over on them. When politicians needed to ask experts what to include in school curriculums, guess who had lobbyists ready to go? Microsoft and Apple. Eventually Google too.
And now that there are fewer tech savvy people? Everything got shittier. Shinier, faster, dumber, more locked down and shittier. And the enshittification is just going to accelerate until people straight up reject it, then it’ll pause for 6 months to a year and start up again.
Just that they’re no easier to use to fool an anti-AI system than using ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works made by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
I believe commercial LLMs have some kind of watermark when you apply AI for grammar and fixing in general, so I just need an AI to make these works undetectable with a private LLM.
That’s not how it works, sorry.
Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14B-parameter model down to 4-bit integer quantization, it’ll still generally outperform a 16-bit floating-point 7B-parameter equivalent.
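The memory side of that trade-off is easy to sanity-check. A back-of-the-envelope sketch (my own illustrative numbers; real quantized files add a bit of overhead for scales and zero-points, so treat these as lower bounds):

```python
# Rough memory footprint of model weights: params × bits-per-weight / 8.
# Illustrative only; ignores quantization metadata overhead.

def weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

q4_14b = weight_size_gb(14, 4)    # 14B model, 4-bit int quantization
fp16_7b = weight_size_gb(7, 16)   # 7B model, 16-bit floating point

print(f"14B @ 4-bit ≈ {q4_14b:.0f} GB")   # ≈ 7 GB
print(f"7B @ fp16  ≈ {fp16_7b:.0f} GB")   # ≈ 14 GB
```

So the quantized 14B model actually fits in half the memory of the fp16 7B model while generally scoring better.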
IMO, if you want the best deals right now on a 12+ TB HDD, you should use serverpartdeals.com instead. I got two manufacturer-recertified 14 TB enterprise-grade drives from them, and it was way cheaper than buying any 14 TB external drive.
Agreed. It’s also worth noting that '97 is basically an uncensored version of the OG, so for example in the old show Wolverine never gets to cut anyone with his claws and make them bleed. In '97 he can.
I know the movies have been in a slump, but X-Men '97 has been like a 9.5/10
They’re still mounted individually, so you can run RAID5 or RAIDZ on them, same as if they were internal. You can potentially be bandwidth-limited, since USB 3.0 has a 5 Gbps speed limit, but realistically only for reads, and you’re still fine in terms of overall performance since they’re all spinning disks anyhow. 5 Gbps is fast enough for any media server/NAS unless you’ve got a 10-gig LAN/internet connection and feel the compulsive need to saturate it.
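The bus-limit claim checks out on paper. A quick sketch with assumed numbers (~180 MB/s sequential throughput per 3.5" HDD is typical but varies by drive):

```python
# Back-of-the-envelope check on the USB 3.0 bottleneck for a 4-bay DAS.
# Assumption: ~180 MB/s sequential read per spinning disk.

usb3_gbps = 5
usb3_mbytes = usb3_gbps * 1000 / 8   # 625 MB/s raw line rate (less after protocol overhead)
hdd_mbytes = 180                      # assumed per-drive sequential speed
drives = 4

aggregate = drives * hdd_mbytes       # 720 MB/s if all four drives stream at once
print(f"USB 3.0 ceiling:   {usb3_mbytes:.0f} MB/s")
print(f"4 drives combined: {aggregate} MB/s -> bus-limited under full parallel reads")
```

So you only hit the ceiling when all drives read flat-out at once (e.g. a RAID scrub or rebuild); normal media streaming is nowhere near it.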
I’d recommend a DAS setup (just a 4-bay USB 3.0 or better hard drive enclosure) with a server with any modern Intel CPU in it.
Dunno. My point was just that it’s an easy change, so presumably if you care you could figure it out pretty easily.
The transcodes folder exists by default and the cleanup defaults to daily, I think. You’d only need to mess with the configuration for transcode caching if you want them to stick around for longer
I use QSV hardware acceleration on Jellyfin with:

- Low Power h264 and HEVC encoders enabled
- VPP tonemapping
- Prefer OS Native DXVA or VA-API decoders checked (apparently this is needed for VPP tonemapping)
- Enable Tone mapping also checked, so it can fall back onto OpenCL if VPP doesn’t work
- Thread count set to Auto
- Preset set to medium
- h265 encoding CRF of 25, h264 encoding CRF of 23
- Throttle transcodes enabled, which seems to increase the number of videos I can have transcoding at once by not transcoding more than necessary whenever a player has enough buffer for the moment
This is on the latest Jellyfin linuxserver.io container with the OpenCL-Intel docker mod.
Also, not sure if this is a factor but I’ve got 16 GB of single channel RAM on it and I use a USB-mounted SSD for my cache and transcode folders. In the past I ran into bandwidth issues by having my transcodes be on the same drive as my media.
I get 4K HDR to 4K SDR tonemapping plus 7.1 Dolby TrueHD to 2.0 AAC transcoding at 70-75fps with my setup.
Transcoding of 4K down to lower resolutions is even faster. 4K HDR to 480p SDR runs at 191fps.
I took a look at the benchmark script those results are from and compared them to the ffmpeg commands auto-generated by my Jellyfin server for actual transcodes.
Here’s how the command to transcode a 4K HDR 10 bit HEVC with 7.1 AAC audio to 4K SDR h264 with 2.0 AAC audio looks on my machine:
```
/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -init_hw_device vaapi=va:,kernel_driver=i915,driver=iHD -init_hw_device qsv=qs@va -filter_hw_device qs -hwaccel vaapi -hwaccel_output_format vaapi -autorotate 0 -i file:"/Films/Man of Steel (2013)/Man of Steel (2013) Bluray-2160p Proper.mkv" -autoscale 0 -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 h264_qsv -low_power 1 -preset medium -look_ahead 0 -b:v 7616000 -maxrate 7616000 -bufsize 15232000 -g:v:0 72 -keyint_min:v:0 72 -vf "setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale_vaapi=format=nv12:extra_hw_frames=24,hwmap=derive_device=qsv,format=qsv" -codec:a:0 libfdk_aac -ac 2 -ab 384000 -ar 48000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 587 -hls_segment_filename "/transcodes/53a1ea6d1a7a34b888e73230f9ff04e2%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/transcodes/53a1ea6d1a7a34b888e73230f9ff04e2.m3u8"
```
And here’s the command that benchmark runs for 10-bit HEVC:
```
/usr/lib/jellyfin-ffmpeg/ffmpeg -y -hide_banner -benchmark -report -c:v hevc_qsv -i /config/ribblehead_4k_hevc_10bit.mp4 -c:a copy -c:v hevc_qsv -preset fast -global_quality 18 -look_ahead 1 -f null - 2>/dev/null
```
So I’m gonna go out on a limb and say there’s a major difference in configuration between the two. Setting global_quality to 18 is kind of absurdly high quality for hardware h265 encoding. You can easily get away with 28 for “good enough”. My setting of 25 for the CRF of h265 encoding is already edging into placebo territory for most videos. That’s all without considering the impact of low power mode, extra_hw_frames, etc.
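For a closer apples-to-apples comparison, a hedged sketch (untested; I’ve just mirrored the encoder flags from the two commands above onto the benchmark’s input file) would be rerunning the benchmark with the low-power encoder and my quality setting:

```shell
# Hypothetical rerun of the benchmark with settings closer to the actual
# server transcode: low-power encoder path, medium preset, quality 25,
# lookahead off. Paths/input are the ones the benchmark script already uses.
/usr/lib/jellyfin-ffmpeg/ffmpeg -y -hide_banner -benchmark \
  -c:v hevc_qsv -i /config/ribblehead_4k_hevc_10bit.mp4 \
  -c:a copy \
  -c:v hevc_qsv -low_power 1 -preset medium -global_quality 25 -look_ahead 0 \
  -f null - 2>/dev/null
```

That should give fps numbers much closer to what real Jellyfin sessions see.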
You can easily do smooth 4K HDR transcoding with any modern Intel CPU with integrated graphics.
I have an Intel N100 and it can probably handle 2-3 4K HDR transcodes at once. Definitely more if they’re being transcoded down to lower resolutions. Encoding is the most intensive part of the process.
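A rough way to sanity-check the "2-3" estimate from the fps figures reported above (my own back-of-the-envelope math, assuming ~24 fps source content):

```python
# Transcode fps divided by the content's playback fps gives a rough
# real-time multiple, i.e. how many streams can run concurrently.
# Assumption: ~24 fps source content; fps figures are the measured ones above.

content_fps = 24
fps_4k_to_4k = 72     # midpoint of the 70-75 fps measured for 4K HDR -> 4K SDR
fps_4k_to_480p = 191  # measured for 4K HDR -> 480p SDR

print(f"4K -> 4K:   ~{fps_4k_to_4k // content_fps} concurrent streams")
print(f"4K -> 480p: ~{fps_4k_to_480p // content_fps} concurrent streams")
```

That lines up: about 3 simultaneous 4K-to-4K transcodes, and roughly 7 if they’re all going down to 480p.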
I use unpackerr combined with sonarr/radarr and it definitely covers 1 and 2