I have a P400 in my storage server, which currently also runs some media containers like Plex, sonarr-sma, radarr-sma, Jellyfin, and I'm exploring Immich. The GPU is surfaced via Docker and added to each container that needs it for hardware acceleration. Is it possible to leverage the NVIDIA GPU remotely (over the LAN) without the containers accessing it (pseudo-)directly? I want to move the media-handling containers to a Turing Pi 2 and keep only the GPU access on the storage server.
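For context, this is roughly how the GPU is exposed to each container today via the NVIDIA container toolkit (a minimal sketch; the image, container name, and paths are placeholders, not my actual setup):

```bash
# Per-container GPU exposure with the NVIDIA container toolkit.
# "jellyfin" and /srv/media are placeholder names/paths for illustration.
docker run -d --name jellyfin \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /srv/media:/media \
  jellyfin/jellyfin
```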
I'm not sure about Docker support, but there is a GPU-over-IP implementation that supports Linux here: https://github.com/Juice-Labs/Juice-Labs
There's a Super User post about PCIe virtualization, but the approach involves writing custom drivers.
Off the top of my head, a similar transcoding setup comes to mind. In that case I used a volume shared between the media server and the transcoding server, and SSH to run ffmpeg on the remote server.
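A minimal sketch of that approach, assuming a share mounted at the same path on both machines, key-based SSH to the GPU host, and NVENC available there (the hostname, paths, and encoder settings are assumptions, adjust for your setup):

```bash
#!/usr/bin/env bash
# Run ffmpeg on the remote GPU host over SSH, reading and writing on a shared
# volume that both machines mount at /mnt/media (hostname and paths are placeholders).
GPU_HOST="gpu-server.lan"
INPUT="/mnt/media/incoming/episode.mkv"
OUTPUT="/mnt/media/transcoded/episode.mkv"

ssh "$GPU_HOST" ffmpeg \
  -hwaccel cuda -hwaccel_output_format cuda \
  -i "$INPUT" \
  -c:v hevc_nvenc -preset p5 \
  -c:a copy \
  "$OUTPUT"
```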
I think an easier setup would be to proxy the app calls that need the GPU over SSH to your GPU container, then write the output to a volume that the non-GPU host can read from.
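One way to do that proxying transparently is a small wrapper that the media apps call in place of ffmpeg, which just forwards the invocation over SSH. This is only a sketch of the idea (rffmpeg, mentioned below, is a maintained tool built around the same concept), and it assumes key-based SSH auth plus media paths mounted identically on both machines:

```bash
#!/usr/bin/env bash
# Hypothetical drop-in ffmpeg wrapper (e.g. installed as /usr/local/bin/ffmpeg):
# forwards the whole invocation to the GPU host over SSH. Only works if the
# media paths live on a volume mounted at the same path on both machines.
GPU_HOST="gpu-server.lan"
# Re-quote each argument so paths with spaces survive the remote shell.
exec ssh -q "$GPU_HOST" ffmpeg "$(printf '%q ' "$@")"
```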
If you’re looking for transcoding, check out rffmpeg