21.02-ALPHA.1 virsh commands fail

Status
Not open for further replies.

whiskerz007

Dabbler
Joined
Feb 9, 2021
Messages
13
When attempting to pass through hardware to a VM, I came across the following problem.

Code:
truenas# virsh nodedev-list
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused

truenas#


Running the following commands corrects the behavior.

Code:
systemctl unmask libvirtd.socket
systemctl restart libvirtd.service

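To confirm the fix took effect, a quick check (a sketch, assuming the stock Debian systemd unit names) is:

Code:
# the socket should no longer report "masked"
systemctl is-enabled libvirtd.socket
# the daemon should be active
systemctl status libvirtd.service
# the original command should now connect
virsh nodedev-list
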

Was libvirtd.socket intentionally masked?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
The libvirtd systemd unit files are disabled by default, so I assume the libvirtd service is not enabled until a VM is created via the GUI.
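A quick way to confirm the unit state on a fresh install (a sketch; the glob simply matches whatever libvirt units ship with SCALE):

Code:
systemctl list-unit-files 'libvirt*'
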
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I only just installed 21.02-ALPHA.1 to see what's happening with APPS, not VMs. The default seems to be no libvirt/libvirtd services or sockets. If I put a non-root user in the libvirt group, then "virsh nodedev-list" returns what you might expect, whereas root returns:

Code:
root@truenas:~# virsh nodedev-list
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

root@truenas:~#


Is that normal?
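For anyone reproducing the non-root case, group membership can be granted with something like the following ("youruser" is a placeholder; log out and back in for it to take effect):

Code:
usermod -aG libvirt youruser
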
 

pinoli

Dabbler
Joined
Feb 20, 2021
Messages
34
I encountered the same behavior. +1
 

nubian122

Cadet
Joined
Feb 4, 2021
Messages
4
Same for me. I have tried this on two servers, one running the latest 2/18 nightly and one running ALPHA.1. I am unable to pass through any PCI hardware.
 

arnaudf

Cadet
Joined
Feb 21, 2021
Messages
2
Hello,

Same behavior here; the libvirt socket in TrueNAS SCALE is not at the default location.
Workaround: create an alias in $HOME/.zshrc:

alias virsh='virsh -c "qemu+unix:///system?socket=/run/truenas_libvirt/libvirt-sock"'
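(Note: an alias is plain text substitution, so the trailing $1 in the original had no effect and has been dropped.) With the alias loaded, plain virsh invocations target the TrueNAS socket, e.g.:

Code:
source ~/.zshrc
virsh nodedev-list
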
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Are you all trying to bypass the API and Web UI? Can you provide some context on what you are trying to achieve?

The Web UI and CLI go through the API middleware, which simplifies persisting all the information.
 

pinoli

Dabbler
Joined
Feb 20, 2021
Messages
34
I wanted to check several things:
  • whether a VM created manually would eventually show up in the GUI
  • passthrough of the internal GPU to a Windows VM
  • spinning up a macOS VM using OpenCore, following this guide (I believe TrueNAS uses KVM)
  • general messing around with an alpha release
 

whiskerz007

Dabbler
Joined
Feb 9, 2021
Messages
13
@Kris Moore I appreciate the updated documentation. I would be interested in seeing the conversation that went on around moving the socket to a non-standard location. It seems like more work to maintain modifications to upstream packages.
 

waqarahmed

iXsystems
Joined
Aug 28, 2019
Messages
136
@whiskerz007 We do not want users to configure VMs via the CLI in any way, as that increases the risk of breaking something: it is not just the libvirt devices that are involved, but also the devices/VMs stored in the database, and manual tweaking can result in an inconsistent state. Having the socket in a non-standard location makes it a bit more difficult for users to tweak libvirt VM settings.

Regarding PCI devices, we are adding support for detaching them automatically, so the virsh command will not have to be used for this.
 

kinvaris

Cadet
Joined
Mar 11, 2021
Messages
2
@waqarahmed This is security by obscurity, and users will always find a way to work around it. By using non-standard practices you are scaring away new users, existing Linux administrators, and companies who want to integrate your product into existing infrastructure. IMO, keep the system as standard as possible and you will have less unnecessary overhead, less unnecessary complexity, and better maintainability.
I don't know how much time this has cost the project, but I just patched the non-standard setup by creating a simple symlink, which, as opposed to @arnaudf's solution, is much cleaner and also integrates with other applications, e.g. virt-manager.
Code:
ln -s /run/truenas_libvirt/libvirt-sock /var/run/libvirt/libvirt-sock

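Note the symlink lives under /run (a tmpfs), so it has to be recreated after each reboot. With it in place the default system URI works, which is what lets tools like virt-manager connect without the query-string socket path. A sketch, where "truenas" is a placeholder hostname:

Code:
# from a desktop with virt-manager installed; connects over SSH to the default socket path
virt-manager -c 'qemu+ssh://root@truenas/system'
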

 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@kinvaris, this isn't security by obscurity. One of the design features of all TrueNAS editions is to have the middleware mediate administrative actions as much as possible. If a CLI tool like virsh changes the system state, the middleware won't be aware of it. There are essentially two ways of handling this:
  1. Polling the system state constantly, to catch changes made outside the middleware's knowledge. This has very severe performance penalties, so iX has chosen to:
  2. Override changes not made by the middleware, and try to limit access to tools like virsh (the middleware-sanctioned alternative is sketched below).
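The middleware-sanctioned route for ad-hoc queries is the midclt client, which goes through the API rather than talking to libvirt directly. A minimal sketch, assuming the vm.query method (the same client is used later in this thread):

Code:
# list VMs as the middleware sees them, pretty-printed with jq
midclt call vm.query | jq
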
 

E2zin

Dabbler
Joined
Mar 25, 2021
Messages
16
I faced the same issue described above. I am a good example of those neophyte users who get stuck on something, look for solutions to understand what is happening, and maybe do the wrong things. But I also agree that hiding this removes useful features like virsh nodedev-list pci, which was in the official documentation until early 2020.
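For reference, that documented command, pointed at the SCALE socket location mentioned earlier in this thread, would look something like:

Code:
virsh -c 'qemu+unix:///system?socket=/run/truenas_libvirt/libvirt-sock' nodedev-list --cap pci
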

I am trying to create an Ubuntu VM with GPU passthrough (NVIDIA GTX 670).
When I try to select a PCI device to pass through, the list is empty, even though I have at least 2 USB controllers, 2 HBAs with no disks used in TrueNAS yet, the NVIDIA GPU (and the onboard Matrox one, used by the console). I feel there might be something missing for me to mark some of those as available to be used. If it were possible, I would love to be able to also assign the onboard Matrox card to a VM (to serve as an always-on "dashboard").

I understand this is ALPHA software, and I want to help any way I can, but I am severely limited in my current knowledge of how the TrueNAS middleware works.

Thank you!
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@E2zin can you please confirm whether you are able to see available PCI devices with the "midclt call vm.device.passthrough_device_choices | jq" command? I think there might be a bug in the UI. As for GPU passthrough, that is not officially supported yet; we are working on adding support for it, which should land sometime soon :)
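For convenience, that command run from a root shell on the TrueNAS host:

Code:
midclt call vm.device.passthrough_device_choices | jq
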
 