Plex for TrueNAS Scale - Unable to get hardware transcoding functional via Intel iGPU / QuickSync

Joined
Jan 8, 2024
Messages
21
Thank you everyone in advance for any insight or help you may be able to offer!

I have TrueNAS Scale version 23.10.1 running as a VM. I have successfully passed through the iGPU to TrueNAS, and it reports within the system as a dedicated device, PCI device 06:11.0 - Display Controller.

I've configured, deployed, and successfully integrated Plex into my TrueNAS Scale system. Everything is working as it should, except for hardware transcoding via the iGPU on my Intel i5-12600K. I am using Plex Pass and it is configured successfully.

First, I was unsure whether this should go in the virtualization forum or this one, but I posted here since it's specific to an application (Plex), and from everything I can observe the passthrough itself is working: TrueNAS sees the device via a dedicated IOMMU group and reports it as a dedicated device. I'm able to play back all of my media, but Plex never switches to HW transcoding no matter what I do with the configuration.

I do have the GPU available within the TrueNAS Plex app configuration, and I have selected "Allocate 1" as per the guides that I have seen. I have also attempted "Allocate 5" or various numbers to try and force this to accept HW encoding.

In addition, Plex itself does see the GPU as the Hardware transcoding device. I have tried both the "Auto" option and selecting the iGPU as a dedicated device within the Plex configuration, which it lists as "AlderLake-S GT1."

The "Enable Host Path for Transcode" is also enabled.

But CPU-only encoding persists.
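For what it's worth, here's a rough sanity check I've been doing from the TrueNAS shell to see whether the "Allocate 1" actually lands on the Plex pod - the ix-plex namespace and the pod name are just the defaults from my install, so adjust as needed:

Code:
# list the pods for the Plex app (namespace follows SCALE's ix-<release> convention)
sudo k3s kubectl get pods -n ix-plex

# check the GPU request/limit on the pod (substitute the pod name from the output above)
sudo k3s kubectl describe pod -n ix-plex <pod-name> | grep -iA3 gpu
# if "Allocate 1" took effect, I'd expect to see something like gpu.intel.com/i915: 1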

My system is as follows:
Motherboard: ASRock B660M Pro RS Intel B660 (BIOS-12.03 9/12/2023)
CPU: Intel i5-12600K (12th Gen) (4-Core Allocated)
RAM: Crucial Pro RAM 64GB DDR4 3200 (2x32GB) (16GB Allocated)
Boot Storage: (2) Samsung SSD 980 PRO 1TB - ZFS RAID 1
HBA: LSI Broadcom SAS 9300-8i 8-port
HBA Storage: Seagate IronWolf 8TB NAS 7200 RPM 256MB Cache (Mirrored)
Cache Drive: Kingston 256GB SATA SSD
Primary GPU: Gigabyte GeForce GTX 1650 OC 4G
Passthrough GPU: Intel iGPU AlderLake-S GT1
Hypervisor: Proxmox 8.1.3
TrueNAS Scale: 23.10.1

Raw passthrough via a dedicated IOMMU group is configured:

[Screenshot: Proxmox PCI passthrough configuration showing the iGPU in a dedicated IOMMU group]
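For reference, this is roughly how I checked the grouping and the driver binding from the Proxmox shell (00:02.0 is just where the iGPU usually sits on the host - confirm the address with lspci first):

Code:
# list every IOMMU group and the devices inside it (run on the Proxmox host)
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    echo -n "IOMMU group $g: "; lspci -nns "${d##*/}"
done

# confirm which driver has claimed the iGPU - it should be vfio-pci, not i915
lspci -nnk -s 00:02.0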


The Plex dashboard shows I am unable to get HW Transcoding, and I can confirm that it isn't working because my CPU goes to the moon:

[Screenshot: Plex dashboard showing software transcoding with high CPU usage]


The shell shows that I have successfully passed through the iGPU to TrueNAS, and it reports as a dedicated device within the system as device 06:11.0 Display Controller:

[Screenshot: TrueNAS shell output listing device 06:11.0 Display Controller]
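(The command behind that screenshot was essentially the following - 06:11.0 is the address the VM assigned in my case:)

Code:
# inside the TrueNAS VM - show the passed-through iGPU and which kernel module claimed it
sudo lspci -nnk -s 06:11.0
# "Kernel driver in use: i915" is what I'm expecting to see here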


I have Plex fully up and running, version 1.32.8.7639 with chart 1.7.59, via the official Apps page:


[Screenshot: Plex 1.32.8.7639 with chart 1.7.59 installed from the official Apps page]


The "Allocate 1" for the iGPU is configured for the Application. I have also attempted "Allocate 5" or various numbers to try and force this to accept HW encoding:

[Screenshot: Plex app configuration with "Allocate 1" set for the Intel iGPU]


This is the latest version of Plex, as determined by the settings within the Plex application:

[Screenshot: Plex settings confirming the server is on the latest version]


I've tried both "Auto" and "AlderLake-S GT1" within Plex to get it to accept the QuickSync GPU as the transcoding device:

[Screenshot: Plex transcoder settings showing the "Auto" and "AlderLake-S GT1" options]


At this point, I've run out of ideas. I've rebooted the server, the app, and my devices; I've re-installed Plex; I've added and removed my Plex Pass. I've been working on this for at least three days now after work, but I can't seem to get this to take, and I feel like I'm taking crazy pills. This is the first time I've done anything like this since my early days of messing around with Linux back in 2004, so I'm super rusty.

Any help you can offer is greatly appreciated, and I'll do my best to provide any updates or information you may need. I did see this thread describing similar issues, but that appears to be about passing the GPU to a VM within the TrueNAS system, which is not what I want. Otherwise, I'm stumped at this point.
 

zstakacs

Cadet
Joined
Nov 15, 2022
Messages
3
Same issue here with an nVidia A4000. I can select it, but I can't use it for HW transcoding.
 
Joined
Jan 8, 2024
Messages
21
Same issue here with an nVidia A4000. I can select it, but I can't use it for HW transcoding.
I'm assuming you have (2) GPUs? From what I've read, and I could be mistaken, you need both a GPU to run the system and a GPU for the transcoding. In my case I have a 1650 for primary use, and the intention was to use QuickSync for transcoding because it is supposed to fly.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm assuming you have (2) GPUs? From what I've read, and I could be mistaken, you need both a GPU to run the system and a GPU for the transcoding.

You only need two GPUs if you plan to use isolation to pass one to a VM (inside of TrueNAS) - using them for Apps doesn't require a second one.

ProxMox is the host OS for the TrueNAS VM; perhaps we're seeing a weird edge-case where it's trying to do things with the GPU still at the ProxMox level. Did you set up SR-IOV on ProxMox to ensure that it would be willing to pass a fully-capable (including QuickSync) GPU device to the TrueNAS VM?
 
Joined
Jan 8, 2024
Messages
21
You only need two GPUs if you plan to use isolation to pass one to a VM (inside of TrueNAS) - using them for Apps doesn't require a second one.

ProxMox is the host OS for the TrueNAS VM; perhaps we're seeing a weird edge-case where it's trying to do things with the GPU still at the ProxMox level. Did you set up SR-IOV on ProxMox to ensure that it would be willing to pass a fully-capable (including QuickSync) GPU device to the TrueNAS VM?

Correct, I did - both IOMMU and iommu=pt are enabled on the Proxmox host, and the kernel driver in use is reporting as vfio-pci:

However, the more I look at this, the more I think that, despite a blacklist entry being written for the i915 driver, the Proxmox host is still activating it somehow:

[Screenshot: Proxmox host output showing the i915 driver still being loaded]
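For completeness, the blacklist/vfio setup on the Proxmox side looks roughly like this on my host - the file names are just what I used, and the vendor:device ID should be checked against your own lspci -nn output:

Code:
# /etc/modprobe.d/blacklist-i915.conf
blacklist i915

# /etc/modprobe.d/vfio.conf - bind the iGPU to vfio-pci by vendor:device ID
options vfio-pci ids=8086:4680

# then regenerate the initramfs on the host and reboot
update-initramfs -u -k all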
 

zstakacs

Cadet
Joined
Nov 15, 2022
Messages
3
I'm assuming you have (2) GPUs? From what I've read, and I could be mistaken, you need both a GPU to run the system and a GPU for the transcoding. In my case I have a 1650 for primary use, and the intention was to use QuickSync for transcoding because it is supposed to fly.
I only have this one GPU in the machine (no monitor connected). This setup worked fine 1-2 weeks ago with the current RTX A4000, and for months with the previous P2000. Tdarr can still use the card, and so far the two have been able to use it in parallel. Unfortunately, my processor does not support QuickSync (Xeon E5-2697 v3).
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Correct, I did - both IOMMU and iommu=pt are enabled on the Proxmox host, and the kernel driver in use is reporting as vfio-pci:

However, the more I look at this, the more I think that, despite a blacklist entry being written for the i915 driver, the Proxmox host is still activating it somehow:

This is ProxMox still trying to claim the i915 as it wants a GPU device for itself. Is the GTX1650 you mentioned being used by a ProxMox workload (isolated?) as well, or is it also doing work inside TrueNAS?

The solution might be enabling GPU SR-IOV - Intel's version of multiple virtual GPUs - on ProxMox itself, letting you split off the iGPU into multiple VFs (Virtual Functions) - you can then still satisfy ProxMox's request for a GPU, while passing an iGPU VF device to the TrueNAS VM.

See the (rather involved) guide here: https://www.michaelstinkerings.org/gpu-virtualization-with-intel-12th-gen-igpu-uhd-730/
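Very roughly, once the DKMS module from that guide is in place, the VF split ends up looking something like this - the kernel options, VF count, and PCI address here are illustrative, so treat the guide as the authority:

Code:
# host kernel cmdline gains something along the lines of: i915.enable_guc=3 i915.max_vfs=7
# then carve out virtual functions from the physical iGPU (00:02.0 on most boards)
echo 3 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

# the VFs appear as extra PCI functions that can each be passed to a VM
lspci | grep -i "vga\|display"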
 
Joined
Jan 8, 2024
Messages
21
This is ProxMox still trying to claim the i915 as it wants a GPU device for itself. Is the GTX1650 you mentioned being used by a ProxMox workload (isolated?) as well, or is it also doing work inside TrueNAS?

The solution might be enabling GPU SR-IOV - Intel's version of multiple virtual GPUs - on ProxMox itself, letting you split off the iGPU into multiple VFs (Virtual Functions) - you can then still satisfy ProxMox's request for a GPU, while passing an iGPU VF device to the TrueNAS VM.

See the (rather involved) guide here: https://www.michaelstinkerings.org/gpu-virtualization-with-intel-12th-gen-igpu-uhd-730/
No, the 1650 is simply connected and is not dedicated to any device or isolated. I've not taken any blacklist actions or done any kind of isolation for this device. It looks like the Debian host is also correctly using the 1650?

[Screenshot: Proxmox host output showing the GTX 1650 in use]
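(The check on the Proxmox side was basically this - whatever shows under "Kernel driver in use" is what the host grabbed for the 1650:)

Code:
# on the Proxmox host - see which driver, if any, has claimed the GTX 1650
lspci -nnk | grep -iA3 nvidia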
 
Joined
Jan 8, 2024
Messages
21
No, the 1650 is simply connected and is not dedicated to any device or isolated. I've not taken any blacklist actions or done any kind of isolation for this device. It looks like the Debian host is also correctly using the 1650?

Also of note, it does look like TrueNAS sees the device as isolated and by itself - but I think if that weren't the case I wouldn't be able to use the "Allocate 1" setting I'd configured before?

[Screenshot: TrueNAS showing the iGPU as an isolated device]
 
Joined
Jan 8, 2024
Messages
21
Hmm. Do you have render devices in /dev/dri on your TrueNAS install?
Indeed! I did some more digging as well - based on my understanding, I'm 95% certain that I have bypassed the Hypervisor and given full bare-metal access of the iGPU to the VM.

This is the render node -

[Screenshot: render node present under /dev/dri]
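(That's just the device listing - the card/render minor numbers may differ, but the renderD* node is the part Plex/VA-API needs:)

Code:
ls -l /dev/dri
# expecting a card* entry plus a renderD* render node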


TrueNAS reports loading the i915 module when running
Code:
lspci -v


[Screenshot: lspci -v output showing the i915 kernel module in use]


I can see the TrueNAS kernel load the drivers for the processor's iGPU, including initializing the chip. Some of the errors I'm still not 100% sure about, but I don't think they are show-stoppers, especially since it does initialize at the end, and because I don't have a monitor attached.

[Screenshot: dmesg output showing the kernel initializing the i915 and loading firmware]


That also looks like the correct driver, from what I understand. And yet, CPU-only transcoding persists.

[Screenshot: the driver reported for the iGPU]
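For anyone checking the same thing later, the bound driver can also be read straight out of sysfs - card0 assumes the iGPU is the only DRM device, which it is in my VM:

Code:
cat /sys/class/drm/card0/device/uevent
# DRIVER=i915 plus the PCI_ID line confirms which module owns the device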


Thanks for your help, BTW - I'm really... baffled at this point. Everything I can see, especially the fact that TrueNAS loads all of the firmware, tells me this should work?!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
because I don't have a monitor attached.

Shot in the dark - have you tried connecting one? It may not be applicable here, but I have vague recollections of certain GPUs requiring a connected display device or dummy-plug that can fake the EDID info in order for some functionality to fire up. It may have been more related to actual renders or "cloud gaming" setups, but it's worth ruling out here.
 
Joined
Jan 8, 2024
Messages
21
Shot in the dark - have you tried connecting one? It may not be applicable here, but I have vague recollections of certain GPUs requiring a connected display device or dummy-plug that can fake the EDID info in order for some functionality to fire up. It may have been more related to actual renders or "cloud gaming" setups, but it's worth ruling out here.
Just to confirm, connect the display to the pass-through TrueNAS HDMI (The integrated HDMI) - and not the GTX-1650 which serves as the primary GPU when doing BIOS maintenance?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Just to confirm, connect the display to the pass-through TrueNAS HDMI (The integrated HDMI) - and not the GTX-1650 which serves as the primary GPU when doing BIOS maintenance?
The iGPU HDMI output would be the one I'd suggest trying it with - although the GTX1650 being "primary" might also be messing with things somehow. But you're getting render devices, which suggests it is in fact enabling things.

My only Alder Lake systems at the moment don't have iGPUs right now so I'm kind of flying blind. I may have to find a 12100 or something.
 
Joined
Jan 8, 2024
Messages
21
The iGPU HDMI output would be the one I'd suggest trying it with - although the GTX1650 being "primary" might also be messing with things somehow. But you're getting render devices, which suggests it is in fact enabling things.

My only Alder Lake systems at the moment don't have iGPUs right now so I'm kind of flying blind. I may have to find a 12100 or something.
So I did attach a display, with no change in behavior. To be clear, the GTX 1650 is "invisible" to TrueNAS because it isn't being passed through at all - if I run lspci within the TrueNAS shell, it doesn't even show up. I meant "primary" in the sense that it's what I need to use at boot for editing the BIOS config (if I need access to it).

[Screenshot: lspci output in the TrueNAS shell, with no GTX 1650 listed]


Like you said, the fact that it's doing the following:
  • Detecting the render node
  • Kernel reporting the device as i915
  • Loading the firmware
  • Initializing the i915 at boot
  • Reporting the "Allocate 1" GPU as the transcode device
  • Showing the AlderLake-S GT1 as the transcoding device within Plex
At this point, my guess is a possible bug within either TrueNAS, Plex, or the Kernel itself. I'm not exactly sure which, or how to go about reporting it, and to whom, but is that a fair assessment to make at this stage? Or is there anything else I could be missing? I work in software, so I hate to use the "Bug" word without being 1,000% sure. But I'm out of ideas at this stage.
 
Joined
Jan 8, 2024
Messages
21
The iGPU HDMI output would be the one I'd suggest trying it with - although the GTX1650 being "primary" might also be messing with things somehow. But you're getting render devices, which suggests it is in fact enabling things.

My only Alder Lake systems at the moment don't have iGPUs right now so I'm kind of flying blind. I may have to find a 12100 or something.
So I went through the process of removing the iGPU bare-metal passthrough and re-did the entire sequence. My experience remains the same: I can see everything as in the screenshots above, but it never switches to hardware encoding.

Is there anything I can do with your team, or logs I can provide, to try and find why it's not triggering? I'm still new to the TrueNAS ecosystem, so I don't know if there is a bug-reporting process, or how I should best try to help others in my situation.

Like I said, at this point I'm pretty stumped, with it showing as an available transcoder via /dev/dri and appearing as "Allocate 1" - but I'm willing to do whatever I can to continue troubleshooting.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@AngrySailorWithBeer Probably the best option (aside from continuing to collaborate here with others who might have the hardware to reproduce it) is to submit a bug/feature request from the iX Jira and include a debug file (System -> Advanced -> Save Debug) - grab logs from inside the container too if you can.

If you're able to temporarily run SCALE bare-metal on your Proxmox host (on separate boot media) that will also help isolate that as a potential contributing factor.
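If it helps, pulling the app logs from the TrueNAS shell looks roughly like this - the namespace assumes the default ix-<release> naming for the official chart:

Code:
# find the Plex pod, then dump its logs to a file you can attach to the ticket
sudo k3s kubectl get pods -n ix-plex
sudo k3s kubectl logs -n ix-plex <pod-name> > /tmp/plex-app.log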
 
Joined
Jan 8, 2024
Messages
21
@AngrySailorWithBeer Probably the best option (aside from continuing to collaborate here with others who might have the hardware to reproduce it) is to submit a bug/feature request from the iX Jira and include a debug file (System -> Advanced -> Save Debug) - grab logs from inside the container too if you can.

If you're able to temporarily run SCALE bare-metal on your Proxmox host (on separate boot media) that will also help isolate that as a potential contributing factor.
@HoneyBadger - So I was finally able to get Plex to activate GuC by forcing it. The kernel complains, but the system now at least does report GuC and submission acceptance:

Code:
admin@truenas[~]$ sudo dmesg | grep guc       
[sudo] password for admin:
[    0.000000] Command line: BOOT_IMAGE=/ROOT/23.10.1@/boot/vmlinuz-6.1.63-production+truenas root=ZFS=boot-pool/ROOT/23.10.1 ro libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 intel_iommu=on zfsforce=1 nvme_core.multipath=N i915.enable_guc=3
[    0.015917] Kernel command line: BOOT_IMAGE=/ROOT/23.10.1@/boot/vmlinuz-6.1.63-production+truenas root=ZFS=boot-pool/ROOT/23.10.1 ro libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 intel_iommu=on zfsforce=1 nvme_core.multipath=N i915.enable_guc=3
[    4.639191] Setting dangerous option enable_guc - tainting kernel
[    6.334845] i915 0000:06:11.0: [drm] GuC firmware i915/tgl_guc_70.bin version 70.5.1
admin@truenas[~]$ sudo dmesg | grep i915 
[    0.000000] Command line: BOOT_IMAGE=/ROOT/23.10.1@/boot/vmlinuz-6.1.63-production+truenas root=ZFS=boot-pool/ROOT/23.10.1 ro libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 intel_iommu=on zfsforce=1 nvme_core.multipath=N i915.enable_guc=3
[    0.015917] Kernel command line: BOOT_IMAGE=/ROOT/23.10.1@/boot/vmlinuz-6.1.63-production+truenas root=ZFS=boot-pool/ROOT/23.10.1 ro libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 intel_iommu=on zfsforce=1 nvme_core.multipath=N i915.enable_guc=3
[    4.640158] i915 0000:06:11.0: [drm] VT-d active for gfx access
[    4.640191] i915 0000:06:11.0: [drm] Using Transparent Hugepages
[    4.653740] i915 0000:06:11.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[    4.653742] i915 0000:06:11.0: [drm] Failed to find VBIOS tables (VBT)
[    4.657345] i915 0000:06:11.0: [drm] Finished loading DMC firmware i915/adls_dmc_ver2_01.bin (v2.1)
[    6.227822] i915 0000:06:11.0: [drm] failed to retrieve link info, disabling eDP
[    6.334845] i915 0000:06:11.0: [drm] GuC firmware i915/tgl_guc_70.bin version 70.5.1
[    6.334858] i915 0000:06:11.0: [drm] HuC firmware i915/tgl_huc.bin version 7.9.3
[    6.339136] i915 0000:06:11.0: [drm] HuC authenticated
[    6.340763] i915 0000:06:11.0: [drm] GuC submission enabled
[    6.340767] i915 0000:06:11.0: [drm] GuC SLPC enabled
[    6.341848] i915 0000:06:11.0: [drm] GuC RC: enabled
[    6.343164] [drm] Initialized i915 1.6.0 20201103 for 0000:06:11.0 on minor 1
[    6.345276] i915 0000:06:11.0: [drm] Cannot find any crtc or sizes
[    6.345505] i915 0000:06:11.0: [drm] Cannot find any crtc or sizes


I did this by forcing guc=3 - sudo midclt call system.advanced.update '{"kernel_extra_options": "i915.enable_guc=3"}'
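In case anyone wants to repeat or undo this, the same midclt interface can be used to confirm or clear the option again (it only takes effect after a reboot):

Code:
# confirm what is currently set
sudo midclt call system.advanced.config | grep -o '"kernel_extra_options":[^,]*'

# back the change out again if needed
sudo midclt call system.advanced.update '{"kernel_extra_options": ""}'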
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@HoneyBadger - So I was finally able to get Plex to activate GuC by forcing it. The kernel complains, but the system now at least does report GuC and submission acceptance:

*snip*

I did this by forcing guc=3 - sudo midclt call system.advanced.update '{"kernel_extra_options": "i915.enable_guc=3"}'
(Block-coded your middle lines for readability)

Glad to hear that a solution was found, and thanks for reporting back with the detailed fix.

Question - since you're running SCALE as a Proxmox VM, what CPU type did you assign it (kvm64, host, other)? I wonder if that will have an impact on the need for, or ability to, more widely apply this fix.
 
Joined
Jan 8, 2024
Messages
21
(Block-coded your middle lines for readability)

Glad to hear that a solution was found, and thanks for reporting back with the detailed fix.

Question - since you're running SCALE as a Proxmox VM, what CPU type did you assign it (kvm64, host, other)? I wonder if that will have an impact on the need for, or ability to, more widely apply this fix.
Bah, it may have been the exhaustion - but I'm not 100% there yet. I've just made additional progress, in that Plex now sees the card beyond the configuration menu.

Previously, Plex saw the card in the Settings menu of Plex itself if "Allocate 1" was enabled within the TrueNAS Plex app configuration menu. However, the Plex log never printed anything about activating the iGPU/card, just that "one was configured but not found."

After applying this setting, two things changed:

1. The Plex log now prints that it finds the card, but throws an error about not being able to initialize it.
2. Grepping dmesg for i915 now reports that the GPU's GuC is active. Previously, it would report the state of GuC and say that the firmware for GuC, HuC, and the i915 was loaded. There is now an additional line not only for the i915 initializing, but also for GuC initialization.

So now I'm agonizingly close, but not yet there. Though now I think this may be a Plex bug.

And as for the CPU type: it is configured with (4) cores and is set to x86-64-v2-AES.
 