Sending single split GPU to VM?

abishur

Dabbler
Joined
Jun 28, 2022
Messages
26
I have installed a Tesla M40 GPU in my TrueNAS SCALE 22.02.3 Dell R720xd box.

When isolation is disabled on it and I go to share it with my apps, I can see both GPU 0 and GPU 1 on the device.

However, when I isolate it, I only see a single PCI device for my GPU when I run lspci.

And even after isolating the Tesla M40, my apps still show 0 for the nvidia.com/gpu allocation.

According to the isolated GPU area, I have a second (built-in) GPU, a Matrox Electronics Systems Ltd. G200eR2.

Is there a way to send just one of my GPUs to a VM? You would think so, since TrueNAS sees both GPUs when the card isn't isolated, right?
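For reference, this is roughly how I've been checking what the host can see before and after isolating the card. It's only a sketch around standard tools (lspci and nvidia-smi), nothing TrueNAS-specific, so output formats may differ on your system:

#!/usr/bin/env python3
"""Sketch: list GPU-class PCI devices and what the NVIDIA driver reports.

Assumes lspci (pciutils) and nvidia-smi are on the PATH.
"""
import subprocess

def pci_gpus():
    """Return lspci lines that look like display/3D controllers (the M40 shows up as a 3D controller)."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines()
            if "VGA compatible controller" in line or "3D controller" in line]

def nvidia_gpus():
    """Return the GPU names the host NVIDIA driver currently exposes."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True).stdout
        return out.strip().splitlines()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # driver not loaded, or the card has been isolated away from the host

if __name__ == "__main__":
    print("PCI display/3D devices:")
    for line in pci_gpus():
        print("  ", line)
    print("GPUs visible to the NVIDIA driver:")
    for line in nvidia_gpus() or ["(none)"]:
        print("  ", line)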
 

angst911

Dabbler
Joined
Sep 11, 2015
Messages
12
I have installed a Tesla M40 GPU in my TrueNAS SCALE 22.02.3 Dell R720xd box.

When isolation is disabled on it and I go to share it with my apps, I can see both GPU 0 and GPU 1 on the device.

However, when I isolate it, I only see a single PCI device for my GPU when I run lspci.

And even after isolating the Tesla M40, my apps still show 0 for the nvidia.com/gpu allocation.

According to the isolated GPU area, I have a second (built-in) GPU, a Matrox Electronics Systems Ltd. G200eR2.

Is there a way to send just one of my GPUs to a VM? You would think so, since TrueNAS sees both GPUs when the card isn't isolated, right?
Are you using the built-in drivers, or did you install the virtual GPU (vGPU) driver? I don't think the built-in GPU driver supports splitting the GPU (in fact, I'm pretty sure it doesn't).

In the container configuration, I think you are misinterpreting the GUI: it's not showing how many GPUs you have, but how many to present to the container. "0" or "1", not "Device 0" and "Device 1".

If you installed the virtual GPU drivers, then you've gone into a whole new area of unsupported territory. Been there, done that... it's ugly. The virtual GPU driver at the host level no longer supports display/graphics mode, so you couldn't use the Plex container the way you want; you'd have to run Plex in a VM instead of as an app/container.

I'm down a different crazy path: I have two M40s in my system, and if I isolate one of them, both of them disappear from the host.
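To illustrate what I mean (this is my understanding of how the SCALE app layer handles it, not something pulled from the TrueNAS source): the apps run on k3s with the NVIDIA device plugin, and the number you pick in the GUI ends up as a resource count on the container, roughly like this mock-up of a pod spec fragment. The app name is just a placeholder:

# Illustration only: my understanding is that the GUI value becomes a
# Kubernetes resource *count* on the app's container, shaped roughly like this:
pod_spec_fragment = {
    "containers": [{
        "name": "plex",                  # hypothetical app name
        "resources": {
            "limits": {
                "nvidia.com/gpu": 1,     # "give this container 1 GPU"
            }
        },
    }]
}
# So selecting 0 means "no GPU for this container" and 1 means "one GPU,
# whichever one the scheduler hands it". There's no "Device 0 vs. Device 1"
# choice at this level.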
 

abishur

Dabbler
Joined
Jun 28, 2022
Messages
26
Are you using the built-in drivers, or did you install the virtual GPU (vGPU) driver? I don't think the built-in GPU driver supports splitting the GPU (in fact, I'm pretty sure it doesn't).

In the container configuration, I think you are misinterpreting the GUI: it's not showing how many GPUs you have, but how many to present to the container. "0" or "1", not "Device 0" and "Device 1".

If you installed the virtual GPU drivers, then you've gone into a whole new area of unsupported territory. Been there, done that... it's ugly. The virtual GPU driver at the host level no longer supports display/graphics mode, so you couldn't use the Plex container the way you want; you'd have to run Plex in a VM instead of as an app/container.

I'm down a different crazy path: I have two M40s in my system, and if I isolate one of them, both of them disappear from the host.
I would never have thought in a thousand years that it was saying how many GPUs to present to the container 🤦‍♂️. It definitely left me wondering how I could tell it not to send ANY GPUs to the container, so that's good to know.

I am using the built-in drivers. It's good to know that I couldn't send one GPU to a VM and one to a container, but at this point I was planning on adding a second, very low-powered GPU for use with Plex and dedicating the Tesla M40 to two VMs.

Do you have any resources on setting up the virtual GPU drivers that I could use to get started? I've got the GPU working within the VM by dedicating the entire card to it, and I've used it to play a couple of games from Steam. Nothing too amazing, mind you, but enough of a proof of concept to see that it's actually using the video card, and I want to take the next step of using the virtual drivers.
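In case it helps anyone following along, this is roughly how I convinced myself the card could be passed through cleanly before dedicating it to the VM. It's just a sketch that walks the standard /sys/kernel/iommu_groups layout; nothing here is TrueNAS-specific:

#!/usr/bin/env python3
"""Sketch: print IOMMU groups and their member PCI devices.

A GPU is easiest to pass through when it (and its audio function, if any)
sits in its own IOMMU group. If /sys/kernel/iommu_groups is missing,
IOMMU/VT-d probably isn't enabled in the BIOS or kernel.
"""
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")

def iommu_groups():
    """Yield (group_number, [pci_addresses]) for every IOMMU group."""
    for group in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
        devices = [dev.name for dev in (group / "devices").iterdir()]
        yield group.name, devices

if __name__ == "__main__":
    if not IOMMU_ROOT.exists():
        raise SystemExit("No IOMMU groups found -- is VT-d/AMD-Vi enabled?")
    for number, devices in iommu_groups():
        print(f"group {number}: {' '.join(devices)}")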
 

angst911

Dabbler
Joined
Sep 11, 2015
Messages
12
Virtual GPU will put you into massively unsupported territory, and remember that vGPU is a licensed feature. There are ways around the vGPU licensing that the mods here probably won't appreciate, but you can adapt them from certain YouTube videos that cover doing it on Proxmox. Remember that if you do enable virtual GPU, your Plex container won't be able to do HW transcoding; you would have to move it to a VM. Also, the TrueNAS SCALE interface doesn't currently provide a mechanism for allocating MDEVs (aka virtual GPUs) to your VMs, so you'll have to edit the VM configurations by hand and risk losing the changes any time the middleware decides to rewrite the VM configuration.
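If you do go down that road, the mediated-device side is plain sysfs, so you can at least see which vGPU profiles the host driver exposes without any TrueNAS tooling. Something like this sketch (it assumes the standard mdev sysfs layout, and it only shows anything once the NVIDIA vGPU host driver is actually loaded):

#!/usr/bin/env python3
"""Sketch: list the vGPU (mdev) profiles a loaded vGPU host driver exposes.

Walks the standard mediated-device sysfs layout under /sys/class/mdev_bus.
The stock driver that ships with SCALE won't create these entries.
"""
from pathlib import Path

MDEV_BUS = Path("/sys/class/mdev_bus")

def mdev_profiles():
    """Yield (pci_device, profile_id, profile_name, available_instances)."""
    for parent in sorted(MDEV_BUS.iterdir()):
        for mdev_type in sorted((parent / "mdev_supported_types").iterdir()):
            name = (mdev_type / "name").read_text().strip()
            avail = (mdev_type / "available_instances").read_text().strip()
            yield parent.name, mdev_type.name, name, avail

if __name__ == "__main__":
    if not MDEV_BUS.exists():
        raise SystemExit("No mdev-capable devices found (vGPU host driver not loaded?)")
    for dev, type_id, name, avail in mdev_profiles():
        print(f"{dev}  {type_id}  {name}  (available: {avail})")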

Installing Virtual GPU
Pre-reqs:
  1. NVIDIA business account with access to the vGPU driver
  2. The vGPU driver, downloaded
  3. A system backup; you will probably break things
General steps -- I can provide details if you want them, but as I said earlier, it's massively unsupported territory. My current plan, until TrueNAS SCALE properly supports MDEVs and such, is to set up a Proxmox VM, pass the GPU through to it, and run the vGPU there with nested VMs. I'm just trying to figure out how to handle the GPU needs for containers in TrueNAS SCALE.
  1. Enable APT
  2. Download the vGPU licensing and drivers to the system
  3. Configure the modules to load and manually blacklist nouveau
  4. Remove the TrueNAS pre-installed nvidia-dkms drivers and related packages
  5. Configure licensing
  6. Install the NVIDIA-provided vGPU driver
  7. Configure MDEVs and profiles (see the sketch below)
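For step 7, creating an MDEV instance boils down to a write into sysfs. A rough sketch; the PCI address and profile ID below are placeholders you'd replace with real values from the profile listing above, it needs root, and nothing you create this way is tracked by the TrueNAS middleware:

#!/usr/bin/env python3
"""Sketch for step 7: create one mediated device (vGPU) via sysfs.

Writing a fresh UUID into a profile's `create` file is the standard mdev
mechanism. Placeholder values below -- substitute your own card's PCI
address and a profile whose available_instances is still non-zero.
"""
import uuid
from pathlib import Path

PARENT_PCI = "0000:42:00.0"   # placeholder PCI address
PROFILE_ID = "nvidia-18"      # placeholder profile ID

def create_mdev(parent: str, profile: str) -> str:
    """Create an mdev instance under the given parent/profile and return its UUID."""
    mdev_uuid = str(uuid.uuid4())
    create_path = Path(f"/sys/bus/pci/devices/{parent}/mdev_supported_types/{profile}/create")
    create_path.write_text(mdev_uuid + "\n")
    return mdev_uuid

if __name__ == "__main__":
    print("created mdev:", create_mdev(PARENT_PCI, PROFILE_ID))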
 

abishur

Dabbler
Joined
Jun 28, 2022
Messages
26
General steps -- I can provide details if you want them, but as I said earlier, it's massively unsupported territory. My current plan, until TrueNAS SCALE properly supports MDEVs and such, is to set up a Proxmox VM, pass the GPU through to it, and run the vGPU there with nested VMs. I'm just trying to figure out how to handle the GPU needs for containers in TrueNAS SCALE.

Thanks for the details! I think you might be on the right track with the Proxmox VM. I've been considering adding a second server anyway and installing Proxmox on it; you know, something better designed to make use of GPUs than my R720xd, like a 1U chassis I could put a couple of GPUs in to act as remote desktops for thin clients. I might just go ahead and pull the trigger on that now.
 

angst911

Dabbler
Joined
Sep 11, 2015
Messages
12
I think there are external PCIe riser chassis that could be used, but I'm no expert in that space. I went with a 4U/5U-style full-tower chassis from Supermicro instead of a rackmount for a reason. I also don't want a second server making noise.
 