@flatline69 the process should have been smooth. Can you please confirm your TrueNAS Scale version? If you do GPU passthrough via the UI and then remove the GPU from isolation so it can be consumed by the host, the process should be smooth/simple. If you are running an older version of TrueNAS Scale, can you please try the latest nightlies to see if that helps improve the situation?
I am using TrueNAS-SCALE-21.04-ALPHA.1
I circumvented the docker middleware system (not using the UI) so that I could get iptables working (inet access), using a script I found in the forums that creates a boot.sh etc., here:
https://gist.github.com/Jip-Hop/af3b7a770dd483b07ac093c3b205323f
Not going to lie -- the whole Kubernetes/Helm stuff is foreign to me, while docker-compose is not. I have a large app stack deployed, the UI was very limiting the last time I tried it (pre-21.04), and I just need it to work with minimal hassle (WAF-related reasons).
Everything was fine until I did the following (from dev notes):
midclt call system.advanced.update '{"isolated_gpu_pci_ids": ["0000:af:00.0"]}'
This broke Emby transcoding for me (up until this point, I was able to get it to see the GPU). I then used the "cli" command to undo the above, and now I get errors trying to init CUDA.
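For anyone following along, here is a sketch of the isolate/undo pair via `midclt`. The undo line is an assumption on my part -- I'm guessing `isolated_gpu_pci_ids` accepts an empty list to release the GPU back to the host, so verify with the query call before trusting it:

```shell
# Isolate the GPU at 0000:af:00.0 (reserves it for VM passthrough,
# which hides it from host-side consumers like Emby's transcoder)
midclt call system.advanced.update '{"isolated_gpu_pci_ids": ["0000:af:00.0"]}'

# Undo -- assumption: an empty list clears the isolation so the host
# can consume the GPU again
midclt call system.advanced.update '{"isolated_gpu_pci_ids": []}'

# Confirm what the middleware currently has stored
midclt call system.advanced.config | jq '.isolated_gpu_pci_ids'
```

A reboot may still be needed after clearing the list before the host driver rebinds the device.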
Also, when I installed the above packages via apt, there were errors trying to update initramfs, but it did boot afterwards.
Looks like a reinstall in my future but it's really my own fault :(
NOTE: I don't think I was very clear. After installing the above packages, Emby could see the GPU and transcoding was available, but it kept using software decoding instead of hardware. That led me to the dev-notes option of excluding the GPU mentioned above, thinking it was somehow related (nvidia-smi worked both on the host and in the container up until then).
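In case it helps anyone debugging a similar state: a quick way to tell "host can't see the GPU" apart from "container can't see the GPU" is to run nvidia-smi in both places. The image tag below is just an example, not necessarily what the Emby chart uses:

```shell
# On the TrueNAS host: does the driver still enumerate the GPU?
nvidia-smi

# Inside a throwaway container: does the runtime actually expose it?
# (example image; any CUDA-enabled image with nvidia-smi works)
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

If the host-side call fails after undoing the isolation, the device likely never rebound to the nvidia driver, which would also explain the CUDA init errors.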