TrueNAS SCALE 22.02.2: HDD pass-through issues

MarvinFS

Cadet
Joined
Jun 22, 2022
Messages
4
Colleagues,

Could somebody please clarify? I'm running the latest TrueNAS-SCALE-22.02.2 and have successfully passed NVMe drives through to a VM via the GUI, but I'm having issues with HDDs: there are no options for them in the GUI, so I've tried the CLI like this:

Code:
root@truenas[/]# virsh attach-disk 1_homeserver --persistent /dev/disk/by-id/ata-ST4000VX007-2DT166_ZGY6GRYG vde 
Disk attached successfully


root@truenas[/]# virsh attach-disk 1_homeserver --persistent /dev/disk/by-id/ata-ST6000NM021A-2R7101_WSE02CZS vdg
Disk attached successfully


root@truenas[/]# virsh domblklist 1_homeserver                                                                   
 Target   Source
------------------------------------------------------------
 vde      /dev/disk/by-id/ata-ST4000VX007-2DT166_ZGY6GRYG
 vdg      /dev/disk/by-id/ata-ST6000NM021A-2R7101_WSE02CZS

I also verified that it is written correctly via virsh edit _VMNAME_, but when I start the VM the disks are gone, probably overwritten by some system process...

Code:
root@truenas[/]# virsh domblklist 1_homeserver
 Target   Source
------------------

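(A debugging sketch for anyone hitting the same thing: the persistent definition can be checked separately from the live one, since --inactive shows the config libvirt would use on the next start. The VM name and disk IDs are the ones from my commands above.)

Code:
# Compare the live block-device list with the persistent one;
# --inactive shows what libvirt would use on the next start,
# which helps narrow down when the entries disappear.
virsh domblklist 1_homeserver
virsh domblklist 1_homeserver --inactive

# The full persistent XML can be dumped the same way:
virsh dumpxml 1_homeserver --inactive | grep -B1 -A4 'disk/by-id'
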

What am I doing wrong here? I have tried to Google it but found nothing; here on this forum there is only one thread with the same command, used for temporarily attaching drives at run-time...

Regards,
MarvinFS.
 

MarvinFS

Cadet
Joined
Jun 22, 2022
Messages
4
I have found a thread where someone mentions that "SCALE stores these in the configuration database, not an XML or JSON file somewhere."
That is fine, but how on earth would I attach a physical HDD to a VM in that case? No such option is exposed in the GUI, except for ZFS pools.
I mean, I have tried absolutely everything from Hyper-V to Proxmox to Unraid and VMware and never had issues with such a simple and common task...

I have found the VM's XML config in /etc/libvirt/qemu/, but apparently it is also being overwritten on VM start, since changes don't persist even though the file is still there... I'm very confused...

I tried this after making the changes:

Code:
root@truenas[/etc/libvirt/qemu]# virsh define /etc/libvirt/qemu/1_homeserver.xml                                                 
Domain '1_homeserver' defined from /etc/libvirt/qemu/1_homeserver.xml


No luck.

Regards,
MarvinFS
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Using virsh isn't supported and is actively fought against by the config db and middleware (as is editing config files when they exist).
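If you want to see what will actually be enforced, ask the middleware rather than libvirt. Something along these lines (a sketch; midclt ships with SCALE, and vm.query / vm.device.query are, as far as I know, the endpoints the GUI relies on):

Code:
# What the middleware (and therefore the config DB) knows about;
# anything attached only via virsh won't appear here and gets
# dropped when the VM definition is regenerated on start.
midclt call vm.query | jq '.[] | {id, name}'
midclt call vm.device.query | jq
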

Either log a feature request or wait for the next releases for more configuration options to be made available via the GUI.

If you're super-impatient to do what you're attempting, you'll need to switch to plain Debian or Proxmox in the meantime.
 

MarvinFS

Cadet
Joined
Jun 22, 2022
Messages
4
Either log a feature request or wait for the next releases for more configuration options to be made available via the GUI.
It has already been logged in Jira for quite a while and I voted for it there, but it isn't even being considered as far as I can tell. Can you please enlighten me: maybe it is tracked somewhere else, like a roadmap, or maybe betas are available... (I haven't found any)

In the meantime I'm successfully attaching drives at run-time with virsh attach-disk, but doing so after every server restart is an absolute pain. (Maybe there is some hidden option for custom scripts or commands that could run after VM start, so the drives could still be attached automatically; a rough sketch of what I mean is below.)
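Something like this, for example (an untested sketch; the VM name and disk paths are from my earlier commands, the wait loop is a guess at how long autostart takes, and I assume it could be registered as a post-init script under System Settings > Advanced > Init/Shutdown Scripts):

Code:
#!/bin/sh
# Untested sketch: re-attach the pass-through HDDs at run-time after each boot.
VM="1_homeserver"

# Wait until libvirt reports the VM as running (autostart may take a while).
until virsh domstate "$VM" 2>/dev/null | grep -q running; do
    sleep 10
done

# Run-time (non-persistent) attach, same as doing it by hand after every boot.
virsh attach-disk "$VM" /dev/disk/by-id/ata-ST4000VX007-2DT166_ZGY6GRYG vde --live
virsh attach-disk "$VM" /dev/disk/by-id/ata-ST6000NM021A-2R7101_WSE02CZS vdg --live
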
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
No special place or list... I suspect focus is on delivering the advertised features of Bluefin right now (like clustering of apps), so additional "advanced" virtualization features will probably only come after that.

You'll just have to live through the pain of being an "early adopter" for a bit.
 

MarvinFS

Cadet
Joined
Jun 22, 2022
Messages
4
You'll just have to live through the pain of being an "early adopter" for a bit.
That I'm very much acquainted with... I'm from Russia.. :(

Irony aside, thank you for the answer; I'll be monitoring the situation then... I've dropped back to Unraid until then, where this works out of the box... And yeah, hardware passthrough for drives is very rarely needed; I just don't want to make permanent changes while I'm at a crossroads...

Regards,
MarvinFS
 