IOMMU Group 0 Passthrough

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
Hi everyone,

I'm a first-time TrueNAS user. I'm attempting to migrate from a closed NAS system to something with a little more flexibility. I was able to get some older equipment from friends who had upgraded, and found a compatible CPU cheap online which I snapped up. The specs are as follows:
  • ASRock x470 mITX
  • 32GB RAM
  • Ryzen 5600G
  • Nvidia Quadro P400
  • 256GB Samsung 970 NVMe - boot
  • 250GB Crucial SSD - VMs
  • 3 x 3TB WD Red - Dataset Pool
The issue I am having is with the passthrough of the P400 to one of the VMs.
When I open System Configuration > Advanced > GPU Isolation I am able to select the P400 there with no issue. Then, when I go to the VM > Edit > GPUs and select the isolated P400, I get the following error:
Code:
[EINVAL] attribute.pptdev: Not a valid choice. The PCI device is not available for passthru: Following errors were found with the device: Unable to determine iommu group


If I open the shell and execute lspci -v I see that the VGA and Audio parts of the P400 are there and they are the only items listed in IOMMU group 0.

Code:
01:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P400] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation GP107GL [Quadro P400]
        Flags: bus master, fast devsel, latency 0, IRQ 11, IOMMU group 0
        Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Memory at b0000000 (64-bit, prefetchable) [size=256M]
        Memory at c0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at f000 
        Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] Secondary PCI Express
        Kernel driver in use: vfio-pci
        Kernel modules: nouveau, nvidia_current_drm, nvidia_current

01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
        Subsystem: NVIDIA Corporation GP107GL High Definition Audio Controller
        Flags: bus master, fast devsel, latency 0, IRQ 10, IOMMU group 0
        Memory at fc080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
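
For completeness, a loop like this will dump every IOMMU group and its members (just a sketch against the standard /sys/kernel/iommu_groups layout; output formatting will differ per system):

Code:
# List every PCI device in each IOMMU group via sysfs.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done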


If I look at the devices of the VM and select Add > Type: PCI Passthrough Device, the P400's PCI device ID is not listed in the dropdown. I have tried with SR-IOV enabled and disabled with no discernible difference.
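
As a side note, I believe the middleware itself can be asked which PCI devices it considers valid for passthrough; the method name below is from memory, so treat it as an assumption rather than gospel:

Code:
# Ask the TrueNAS SCALE middleware which PCI devices it will offer for
# passthrough (method name recalled from the middleware sources; it may
# differ between releases).
midclt call vm.device.passthrough_device_choices | python3 -m json.tool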

As I said earlier, I am very new to this, but I have done some searching and have not found anything that has helped. I was able to pass through the iGPU in the 5600G with no problem at all. It is likely that I don't know enough to find the correct keywords to search for the answer to this problem, so any guidance would be appreciated.
 

M.F.

Dabbler
Joined
Mar 8, 2022
Messages
10
Hey there, I have the same problem and am searching for help too...
I bought a 1050 Ti for a Windows VM and tried to add the card as a GPU in the settings of the already working (but shut down) Windows VM. Then I get the following message:

[EINVAL] attribute.pptdev: Not a valid choice. The PCI device is not available for passthru: Following errors were found with the device: Unable to determine iommu group [EINVAL] attribute.pptdev: IOMMU support is required.

Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 175, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/service.py", line 907, in create
    rv = await self.middleware._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 148, in do_create
    data = await self.validate_device(data, update=False)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 463, in validate_device
    raise verrors
middlewared.service_exception.ValidationErrors: [EINVAL] attribute.pptdev: Not a valid choice. The PCI device is not available for passthru: Following errors were found with the device: Unable to determine iommu group
[EINVAL] attribute.pptdev: IOMMU support is required.

How can I enable the GPU to pass through to the VM?
Best regards from Germany!
 

Attachments

  • TrueNAS Scale GPU Error 02.JPG (124.1 KB)
  • TrueNAS Scale GPU Error 01.JPG (98.8 KB)

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
I bought a 1050 Ti for a Windows VM and tried to add the card as a GPU in the settings of the already working (but shut down) Windows VM. Then I get the following message:

[EINVAL] attribute.pptdev: Not a valid choice. The PCI device is not available for passthru: Following errors were found with the device: Unable to determine iommu group [EINVAL] attribute.pptdev: IOMMU support is required.

How can I enable the GPU to pass through to the VM?
Best regards from Germany!
Hi M.F.

No one has tried to provide any assistance with my issue yet, but let me see if I can help you out with yours. I was looking through your screenshots and I do not see your GPU in an IOMMU group. That is likely the issue. If you look at the Flags section of your second screenshot, there is no IOMMU group listed after the IRQ. Take a look at my second CODE block to see what I am referring to. You have to turn IOMMU support on in the BIOS of your motherboard before you will be able to pass through the GPU.
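
If you want to double-check from the shell whether the IOMMU actually came up after changing the BIOS setting, something along these lines should do it (assuming a reasonably stock SCALE kernel; the exact dmesg wording varies between kernel versions):

Code:
# Look for the kernel bringing up the IOMMU (AMD-Vi on AMD, DMAR/VT-d on Intel).
dmesg | grep -i -e 'AMD-Vi' -e 'DMAR' -e 'IOMMU'
# An empty directory here usually means the IOMMU is disabled.
ls /sys/kernel/iommu_groups/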

Good luck! I hope this helps!
 

M.F.

Dabbler
Joined
Mar 8, 2022
Messages
10
Hello OneMoreThing, thank you for your answer. I can't find the setting. My motherboard is the ASUS P8Z68-V PRO with the latest BIOS installed; you can find the manual in the attachment. Intel Virtualization is switched on and LucidLogix Virtu is also switched on. What else do I have to switch on to activate the IOMMU group? Or do I have to activate the IOMMU group manually on the command line? Can you tell me the commands for this? Best regards and thank you for your help!
 

Attachments

  • E6696_P8Z68-V_PRO.pdf (7.5 MB)

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Not to mention that the Lucid Virtu garbage and related firmware customizations are not likely to be conducive to a pleasant PCI passthrough experience.
 

M.F.

Dabbler
Joined
Mar 8, 2022
Messages
10
Thanks a lot for your time. I was totally sure that the K version of the 3700 can't be worse than the original one. Man, that sucks... Luckily I have a 3700 lying around here.
 

behzad

Dabbler
Joined
Feb 14, 2022
Messages
15
After I set up the BIOS, the screen is stuck right here...
I can still access the GUI and set up everything for the VM:
- isolate GPU
- VM settings --> insert GPU
but when it starts the VM, I get a crash...

I just want to install Windows in a VM with GPU passthrough (GTX 680 or GTX 670, I have both here).
photo_2022-03-10_12-34-30.jpg
 

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
Thanks a lot for your time. I was totally sure that the K version of the 3700 can't be worse than the original one. Man, that sucks... Luckily I have a 3700 lying around here.
Well, did you get it working? You had quite a few people provide input on your issue.
 

M.F.

Dabbler
Joined
Mar 8, 2022
Messages
10
Well, did you get it working? You had quite a few people provide input on your issue.
Yes, I got the IOMMU thing working. But then I got the problem described here: https://www.truenas.com/community/threads/issues-with-iommu-groups-for-vm-passtrough.96096/

My VM was not starting because the GPU shared the IOMMU group with the HBA and CPU... The workaround in the post above didn't work for me. I give up. Unraid works just fine for VMs, so I will go for an Unraid installation with a TrueNAS SCALE VM instead.
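
For anyone who hits the same wall, the devices sharing a group with the GPU can be listed straight from sysfs (a sketch; the 01:00.0 address is an example from the outputs earlier in the thread, substitute your own):

Code:
# Every device in this directory sits in the same IOMMU group as the GPU and
# would have to be handed to the VM (or left on the host) together.
ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices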

But thanks a lot for your help! I don't get why nobody with a higher skill level shares information here in this post. In the Unraid forum there are a lot more people who help you out if you have a problem. Unraid will add native ZFS support, so I am looking forward to that. Maybe I won't need the TrueNAS VM anymore then.
 

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
Glad to hear that you got something working. I will admit that working with TrueNAS has been frustrating, but I really don't feel like going back to Proxmox and rebuilding my VMs again.

Cheers!
 

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
After I set up the BIOS, the screen is stuck right here...
I can still access the GUI and set up everything for the VM:
- isolate GPU
- VM settings --> insert GPU
but when it starts the VM, I get a crash...

I just want to install Windows in a VM with GPU passthrough (GTX 680 or GTX 670, I have both here).
View attachment 53896
Wow. This is definitely above my skill level and I'm afraid to even take a guess at what the issue might be. I'm sorry that I couldn't help you. All I can suggest is starting a new thread and hoping that you get more attention and help with your issue than I did.

Best of luck!
 

behzad

Dabbler
Joined
Feb 14, 2022
Messages
15
SOLVED

OMG, I can't believe I forgot this. Be sure to tell the BIOS to only use the internal graphics...
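
If you want to double-check that from the shell, the boot_vga attribute shows whether a given card is still the host's primary display (the 01:00.0 address is just an example taken from the outputs above; substitute your card's):

Code:
# 1 means the card is the host's boot/primary VGA device, 0 means it is not.
cat /sys/bus/pci/devices/0000:01:00.0/boot_vga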
 

xbgt85

Cadet
Joined
May 6, 2020
Messages
3
I was about to start a new thread, but I believe I have the same issue as OP. Here are my details below.


I'm attempting to pass through a GPU to a Windows 10 VM:
  • ASRock b550 mITX
  • 32GB RAM
  • Ryzen 5700G
  • Nvidia A2000
  • 4 x 4TB Seagate for data
The issue I am having is with the passthrough of the A2000 to one of the VMs.
When I open System Configuration > Advanced > GPU Isolation I am able to select the A2000 there with no issue. Then, when I go to the VM > Edit > GPUs and select the isolated A2000, I get the following error:

[EINVAL] attribute.pptdev: Not a valid choice. The PCI device is not available for passthru: Following errors were found with the device: Unable to determine iommu group


If I open the shell and execute lspci -v I see that the VGA and Audio parts of the A2000 are there and they are the only items listed in IOMMU group 0.

Code:
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2531 (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation Device 151d
Flags: bus master, fast devsel, IRQ 255, IOMMU group 0
Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
Memory at b0000000 (64-bit, prefetchable) [size=256M]
Memory at c0000000 (64-bit, prefetchable) [size=32M]
I/O ports at f000 [disabled]
Expansion ROM at fc000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Legacy Endpoint, MSI 00
Capabilities: [100] Virtual Channel
Capabilities: [250] Latency Tolerance Reporting
Capabilities: [258] L1 PM Substates
Capabilities: [128] Power Budgeting <?>
Capabilities: [420] Advanced Error Reporting
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Capabilities: [900] Secondary PCI Express
Capabilities: [bb0] Physical Resizable BAR
Capabilities: [c1c] Physical Layer 16.0 GT/s <?>
Capabilities: [d00] Lane Margining at the Receiver <?>
Capabilities: [e00] Data Link Feature <?>
Kernel driver in use: vfio-pci
Kernel modules: nouveau, nvidia_current_drm, nvidia_current

01:00.1 Audio device: NVIDIA Corporation Device 228e (rev a1)
Subsystem: NVIDIA Corporation Device 151d
Flags: fast devsel, IRQ 255, IOMMU group 0
Memory at fc080000 (32-bit, non-prefetchable) [disabled] [size=16K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [160] Data Link Feature <?>
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel

If I run nvidia-smi I get the following error:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

If I look at the devices of the VM and select Add > Type: PCI Passthrough Device, the Device 2531 PCI device ID IS listed in the dropdown.

My first thought was that the card was too new, but I tried a P400 card and get the same error, although for the P400 the nvidia-smi command does work.
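
A side note on the nvidia-smi failure: as far as I understand it, once a card is isolated it gets bound to vfio-pci, so the host NVIDIA driver can no longer talk to it and nvidia-smi failing is expected. A quick way to confirm which driver currently owns the card (bus address taken from my lspci output above):

Code:
# An isolated GPU should show vfio-pci as the driver in use; nvidia-smi on the
# host can only see cards bound to the nvidia driver.
lspci -k -s 01:00.0
readlink /sys/bus/pci/devices/0000:01:00.0/driver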
 

xbgt85

Cadet
Joined
May 6, 2020
Messages
3
I was able to swap in the board (Gigabyte) from my main rig and everything works fine. I would recommend not getting ASRock, since that is the common denominator here.
 

OneMoreThing

Cadet
Joined
Mar 7, 2022
Messages
6
I was able to swap in the board (Gigabyte) from my main rig and everything works fine. I would recommend not getting ASRock, since that is the common denominator here.
Wow! Thanks for that information. No one even suggested that it may have been a hardware problem for my issue (not that anyone actually suggested anything at all) until NOW! Much appreciated!

Guess I'll stay away from ASRock for future virtualization builds.
 

zfsuser9001

Cadet
Joined
Dec 28, 2022
Messages
9
Running into the same issue with a Supermicro X10 board: the A2000 is there and nvidia-smi can talk to it, but nothing seems to use it, and it shows up as Device 2531 instead of NVIDIA RTX A2000. Maybe it's a driver issue. Angelfish used 460.X and Bluefin uses 515.X.
 

zfsuser9001

Cadet
Joined
Dec 28, 2022
Messages
9
I think my device-name issue stems from the development branch not having recently run the update-pciids command as part of the environment. If I run that, it gets recognized as an A2000.
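
For reference, this is roughly what that looks like (update-pciids ships with pciutils; I'm not sure whether the refreshed database survives a SCALE update):

Code:
# Refresh the local PCI ID name database, then list NVIDIA devices with their
# numeric vendor:device IDs so the card is identifiable even with a stale database.
update-pciids
lspci -nn -d 10de: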
 