Transcoding in TrueNAS Scale via Jellyfin Docker

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
Curious -- I have a setup on TrueNAS Scale where Portainer runs as a docker-compose app (via TrueCharts), and Jellyfin runs as a container inside Portainer.

Everything works fine, but I'm now looking at exposing my Nvidia GPU to the Jellyfin container in order to get transcoding set up.

Running nvidia-smi on the host shows my GPU, no problem. But I can't seem to get a compose config that exposes the GPU to Portainer, so that I can then expose it to Jellyfin.

I've tried the compose below, but the TrueCharts compose app simply won't spin up. I've also tried allocating the GPU via the WebUI for the TrueCharts compose app, without the nvidia env vars and 'deploy' config below, and that doesn't seem to work either.

Anyone got something like this running before?

Code:
---
version: '3.8'
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin-nvidia
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=US/Central
      - JELLYFIN_PublishedServerUrl=${PUBLISHED_SERVER}
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ${DATA_PATH}/jellyfin/library:/config
    ports:
      - 8096:8096
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
 
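Once the container does come up, a quick way to confirm the GPU actually made it inside (assuming the container name from the compose above) would be:

Code:
# should print the GPU table from inside the Jellyfin container
docker exec -it jellyfin-nvidia nvidia-smi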

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
Sorry, my bad. That was the compose for Jellyfin. Below is my Portainer compose, which I'm having trouble spinning up via the TrueCharts app.

Code:
version: '3.8'
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer-nvidia
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    security_opt:
      - no-new-privileges:true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /mnt/system-pool/system/media-server/data/portainer:/data
    ports:
      - 9443:9443
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
 

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
Still having trouble with this. I've even tried creating a Debian 11-based VM with GPU passthrough and running Jellyfin via Docker inside it. I must admit, that got me a small step forward, but I still can't get hardware encoding to work.

I seem to have the same issue as the OP here, who ultimately fixed it by upgrading ESXi, which makes me think TrueNAS might be causing the issue here.

Anyway, I've dropped logs and details below. Let me know if anyone has been able to overcome this.


------------------------------------------------
VM: Debian 11.4 x86_64 (installed without a desktop environment)
CPU: 8700k
GPU: Nvidia 1080
TrueNAS: SCALE-22.02.3


------------------------------------------------
how I installed nvidia drivers and the container toolkit on the VM:

Code:
sudo add-apt-repository contrib
sudo add-apt-repository non-free

sudo apt-get update && sudo apt-get upgrade

# if you have a CPU with 64bit op-mode per 'lscpu | grep CPU':
sudo apt-get install linux-headers-amd64 -y

sudo apt-get install firmware-misc-nonfree -y
sudo apt-get install nvidia-driver -y

sudo reboot

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
    && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
 
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install nvidia-container-toolkit -y

sudo systemctl restart docker


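As a sanity check after the install, the usual smoke test for the toolkit is something like the below (the CUDA image tag is just an example); if it prints the GPU table, the Docker side is wired up:

Code:
# verify Docker can hand the GPU to a container
sudo docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi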

------------------------------------------------
docker-compose for jellyfin:

Code:
---
version: '3.8'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin-nvidia
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=US/Central
      - JELLYFIN_PublishedServerUrl=${PUBLISHED_SERVER}
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    user: 1000:1000
    volumes:
      - ${DATA_PATH}/jellyfin:/config
    ports:
      - 8096:8096
      - 8920:8920 #optional
      - 7359:7359/udp #optional
      - 1900:1900/udp #optional
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]


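As an aside, if the deploy-style reservation isn't honoured by the compose version in use, the older approach is to select the nvidia runtime explicitly. This is just a sketch and assumes the runtime has been registered with Docker (which the nvidia-docker2 package does via /etc/docker/daemon.json):

Code:
# alternative: select the nvidia runtime instead of the deploy reservation.
# note: old docker-compose v1 only accepts 'runtime' with file version 2.3/2.4;
# docker compose v2 ignores the version field.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all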


------------------------------------------------
Jellyfin transcoding settings:

per my GPU: https://en.wikipedia.org/wiki/Nvidia_NVDEC

Enable hardware decoding for:
H264
HEVC
MPEG2
VC1
VP8

Enable enhanced NVDEC decoder (checked)

Enable hardware encoding (checked) <-- I can avoid the ffmpeg error noted below (no CUDA-capable device) if I uncheck this
Allow encoding in HEVC format (checked)


------------------------------------------------
nvidia-smi on the VM, which also matches the nvidia-smi output on the container:

Code:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:00:08.0 Off |                  N/A |
| 32%   30C    P8    14W / 180W |      1MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+


+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+



------------------------------------------------
transcode error log:

Code:
ffmpeg version 5.1-Jellyfin Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-libs=-lfftw3f --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto --enable-gpl --enable-version3 --enable-static --enable-gmp --enable-gnutls --enable-chromaprint --enable-libdrm --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
[AVHWDeviceContext @ 0x555d5df0be00] cu->cuInit(0) failed -> CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Device creation failed: -542398533.
Failed to set value 'cuda=cu:0' for option 'init_hw_device': Generic error in an external library
Error parsing global options: Generic error in an external library
 

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
And this is the output from '/usr/lib/jellyfin-ffmpeg/ffmpeg -v debug -init_hw_device cuda' within the container:

---------------------------------------------
Code:
ffmpeg version 5.1-Jellyfin Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-libs=-lfftw3f --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto --enable-gpl --enable-version3 --enable-static --enable-gmp --enable-gnutls --enable-chromaprint --enable-libdrm --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
Reading option '-init_hw_device' ... matched as option 'init_hw_device' (initialise hardware device) with argument 'cuda'.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument debug.
Applying option init_hw_device (initialise hardware device) with argument cuda.
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded lib: libcuda.so.1
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuInit
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceGetCount
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceGet
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceGetAttribute
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceGetName
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceComputeCapability
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxCreate_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxSetLimit
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxPushCurrent_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxPopCurrent_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxDestroy_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemAlloc_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemAllocPitch_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemAllocManaged
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemsetD8Async
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemFree_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpy
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyAsync
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpy2D_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpy2DAsync_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyHtoD_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyHtoDAsync_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyDtoH_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyDtoHAsync_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyDtoD_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMemcpyDtoDAsync_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGetErrorName
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGetErrorString
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuCtxGetDevice
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDevicePrimaryCtxRetain
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDevicePrimaryCtxRelease
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDevicePrimaryCtxSetFlags
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDevicePrimaryCtxGetState
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDevicePrimaryCtxReset
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuStreamCreate
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuStreamQuery
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuStreamSynchronize
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuStreamDestroy_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuStreamAddCallback
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEventCreate
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEventDestroy_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEventSynchronize
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEventQuery
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEventRecord
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuLaunchKernel
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuLinkCreate
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuLinkAddData
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuLinkComplete
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuLinkDestroy
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuModuleLoadData
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuModuleUnload
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuModuleGetFunction
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuModuleGetGlobal
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuTexObjectCreate
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuTexObjectDestroy
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGLGetDevices_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGraphicsGLRegisterImage
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGraphicsUnregisterResource
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGraphicsMapResources
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGraphicsUnmapResources
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuGraphicsSubResourceGetMappedArray
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDeviceGetUuid
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuImportExternalMemory
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDestroyExternalMemory
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuExternalMemoryGetMappedBuffer
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuExternalMemoryGetMappedMipmappedArray
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMipmappedArrayGetLevel
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuMipmappedArrayDestroy
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuImportExternalSemaphore
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuDestroyExternalSemaphore
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuSignalExternalSemaphoresAsync
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuWaitExternalSemaphoresAsync
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuArray3DCreate_v2
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuArrayDestroy
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEGLStreamProducerConnect
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEGLStreamProducerDisconnect
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEGLStreamConsumerDisconnect
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEGLStreamProducerPresentFrame
[AVHWDeviceContext @ 0x55cd7bd76540] Loaded sym: cuEGLStreamProducerReturnFrame
[AVHWDeviceContext @ 0x55cd7bd76540] cu->cuCtxCreate(&hwctx->cuda_ctx, desired_flags, hwctx->internal->cuda_device) failed -> CUDA_ERROR_NOT_SUPPORTED: operation not supported
Device creation failed: -542398533.
Failed to set value 'cuda' for option 'init_hw_device': Generic error in an external library
Error parsing global options: Generic error in an external library
 

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
Just took another look at this this morning, and I'm starting to think it might be a driver compatibility issue. Does anyone know whether the 'compute capability' version noted in the link below has to match driver versions (particularly CUDA driver versions) in a certain way? I'll probably start a thread on the Nvidia developer forums later today if no one here knows.

https://developer.nvidia.com/cuda-gpus
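If it's useful, newer drivers let nvidia-smi report the compute capability directly (I'm not certain which driver version added this query field, so treat it as a maybe):

Code:
# prints the card's compute capability, e.g. 6.1 for a GTX 1080
nvidia-smi --query-gpu=name,compute_cap --format=csv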
 

solitarius

Cadet
Joined
Jun 17, 2022
Messages
9
I am also stuck getting the GPU in the VM to be used by my Plex docker container. I haven't managed it yet, but here are some things you could try.

  1. Run your container with the nvidia runtime (install instructions can be found at https://github.com/NVIDIA/nvidia-docker); see the sketch after this list.
  2. I found a thread from someone who tried to enable Plex transcoding and described his process pretty nicely (https://forum.openmediavault.org/in...nscoding-on-omv-5-in-a-plex-docker-container/).
    The most interesting part is step 6 (some modifications), where he installs nvidia-container-runtime so the NVIDIA_ env variables can be used in an unprivileged container.
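For point 1, a rough sketch of what that looks like (the image is just a placeholder; with the nvidia runtime and the NVIDIA_ variables set, the driver tools get mounted into the container):

Code:
# requires the nvidia runtime to be registered with Docker
# (the nvidia-docker2 package does this via /etc/docker/daemon.json)
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  debian:11 nvidia-smi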
 

buswedg

Explorer
Joined
Aug 17, 2022
Messages
69
I am also stuck getting the GPU in the VM to be used by my Plex docker container. I haven't managed it yet, but here are some things you could try.

  1. Run your container with the nvidia runtime (install instructions can be found at https://github.com/NVIDIA/nvidia-docker)
  2. I found a thread from someone who tried to enable Plex transcoding and described his process pretty nicely (https://forum.openmediavault.org/in...nscoding-on-omv-5-in-a-plex-docker-container/).
    The most interesting part is step 6 (some modifications), where he installs nvidia-container-runtime so the NVIDIA_ env variables can be used in an unprivileged container.
I'm not 100% sure, but I believe the 'nvidia-container-toolkit' package I installed (per the above) already pulls in nvidia-container-runtime and the libs needed for transcoding, so nvidia-docker2 shouldn't strictly be needed. But at this point, I'll try anything.
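For what it's worth, this lists what the toolkit install actually pulled in on the VM:

Code:
# show installed NVIDIA container packages and their versions
dpkg -l | grep -E 'nvidia-(docker|container)|libnvidia-container'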
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
All I can say is that I run an Nvidia card (P2000) under SCALE, with a Plex container, and it works - so the concept should be sound.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Sorry, I should have been clearer.
Plex container - the TrueCharts chart rather than the official one.
 

soleous

Dabbler
Joined
Apr 14, 2021
Messages
30
I know this is an old thread, but I was just searching the forum for how to get the Nvidia container runtime working on SCALE, so I can move from a VM to a Kubernetes application.

I had similar issues getting my Nvidia GPU working under a VM, with the same "no CUDA-capable device is detected" error. Long story short, my problem was the VM's CPU mode configuration: the default was set to Custom with no CPU model selected. As soon as I changed the mode to Host Model or Host Passthrough, it started to work.
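For anyone wanting to verify: SCALE's VMs are libvirt/QEMU underneath, so the CPU mode setting ends up as the cpu mode attribute in the domain XML. Something like this should show it from the host shell (the VM name is a placeholder, and virsh may need to be pointed at SCALE's libvirt socket):

Code:
# expect <cpu mode="host-passthrough" ...> or "host-model" after the change
virsh dumpxml <vm-name> | grep -i '<cpu'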

Hope it helps anyone with similar issues.
 