TrueNAS SCALE VMs not launching

bdgusa

Cadet
Joined
Apr 21, 2022
Messages
2
Hi there,
I've struggled for a while with getting VMs to launch on SCALE. Docker/K8s/Helm (Apps), storage, networking, etc. all work fine.
When I go to launch a VM, nothing happens, irrespective of the OS (I've tried two install ISOs, both stored in the VM storage location) and irrespective of UEFI or Legacy BIOS. If I try to download the logs, the file is empty. The serial console just hangs at "Connected to domain 'vmname'" with no interactive console.
If I launch without an ISO attached, I can interact with the serial console and the boot menu.
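For what it's worth, the VM can also be poked at directly through libvirt from the host shell (virsh should be available, since SCALE drives VMs through libvirt); '1_vmname' below is just a placeholder for whatever domain name libvirt assigns, e.g. '1_test' in the logs further down:

virsh -c qemu:///system list --all        # is the domain defined, and is it running?
virsh -c qemu:///system dumpxml 1_vmname  # inspect the domain XML the middleware generated
virsh -c qemu:///system console 1_vmname  # attach to the serial console directly, bypassing the UI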

I'm a little lost on where to go next.

Searching for the VM name in the logs:
/var/log/syslog:Jun 9 07:22:12 truenas1 middlewared[1175]: libvirt: QEMU Driver error : Domain not found: no domain with matching name '1_test'
/var/log/syslog:Jun 9 07:23:19 truenas1 systemd[1]: Started Virtual Machine qemu-1-1test.
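(For the record, that search was just a plain grep over syslog, along these lines, adjusted for the VM name:)

grep '1_test' /var/log/syslog*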

If I tail /var/log/middlewared.log while creating and starting a new VM with an ISO attached (this time stored on a different dataset than the VM):
[2022/06/09 07:34:16] (DEBUG) VMDeviceService.update_device():106 - Creating ZVOL data/vm/newvm-d7tcua with volsize 32212254720
[2022/06/09 07:34:32] (DEBUG) middlewared.__set_guest_vmemory():22 - Setting ARC from 101388576768 to 97093609472
/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_lifecycle.py:105: RuntimeWarning: coroutine 'accepts.<locals>.wrap.<locals>.nf' was never awaited
self.start(id, {'overcommit': True})
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
[2022/06/09 07:36:52] (DEBUG) VMService.teardown_guest_vmemory():53 - Giving back guest memory to ARC: 101388576768
The /mnt/data/vm directory is empty. No VMs there.
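The zvol itself does exist, though (it shows up under /dev, as listed below); something like this confirms it from the shell, with the dataset name taken from the middleware log above:

zfs list -t volume -r data/vm    # newvm-d7tcua is listed as a volume
ls -l /dev/zvol/data/vm/         # and exposed as a block device node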
Searching the filesystem for "*newvm*" yields:
  • VM config in /etc/libvirt/qemu/1_newvm.xml
  • /dev/data/vm/newvm-d7tcua
  • /dev/zvol/data/vm/newvm-d7tcua
  • /var/db/system/syslog-XXXX/log/libvirt/qemu/1_newvm.log
The last entry (the qemu log) contains:
2022-06-09 14:34:39.480+0000: starting up libvirt version: 7.0.0, package: 3 (Andrea Bolognani <eof@kiyuko.org> Fri, 26 Feb 2021 16:46:34 +0100), qemu version: 5.2.0Debian 1:5.2+dfsg-11+deb11u1, kernel: 5.10.93+truenas, hostname: truenas1.local
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-1-1_newvm \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-1_newvm/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-1_newvm/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-1_newvm/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=1_newvm,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-1_newvm/master-key.aes \
-machine pc-i440fx-5.2,accel=kvm,usb=off,dump-guest-core=off,memory-backend=pc.ram \
-cpu SandyBridge \
-m 4096 \
-object memory-backend-ram,id=pc.ram,size=4294967296 \
-overcommit mem-lock=off \
-smp 2,sockets=1,dies=1,cores=2,threads=1 \
-uuid 614a47c4-e277-4903-8678-448e1bfe8e8f \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=32,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-shutdown \
-boot strict=on \
-device nec-usb-xhci,id=usb,bus=pci.0,addr=0x4 \
-device ahci,id=sata0,bus=pci.0,addr=0x5 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
-blockdev '{"driver":"file","filename":"/mnt/data/iso/ubuntu-20.04.4-live-server-amd64.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=sata0.0,drive=libvirt-2-format,id=sata0-0-0,bootindex=1 \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/data/vm/newvm-d7tcua","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=sata0.1,drive=libvirt-1-format,id=sata0-0-1,bootindex=2,write-cache=on \
-netdev tap,fd=36,id=hostnet0,vhost=on,vhostfd=37 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:a0:98:2b:e0:6e,bus=pci.0,addr=0x3 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev spicevmc,id=charchannel0,name=vdagent \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc 0.0.0.0:0 \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,xres=1024,yres=768,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
2022-06-09T14:36:51.542398Z qemu-system-x86_64: terminating on signal 15 from pid 2287222 (/usr/sbin/libvirtd)
2022-06-09 14:36:51.743+0000: shutting down, reason=destroyed
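So qemu starts, redirects the serial port to a pty, and then sits there until it is killed via libvirt about two minutes later (the 'teardown_guest_vmemory' entry at 07:36:52 local time lines up with the 14:36:51 UTC SIGTERM). Next time I'll watch it live from the host shell, something along these lines:

virsh -c qemu:///system domstate 1_newvm                      # running / shut off / crashed
virsh -c qemu:///system domblklist 1_newvm                    # confirm the ISO and zvol are attached
tail -f /var/db/system/syslog-*/log/libvirt/qemu/1_newvm.log  # follow the qemu log during boot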





TrueNAS SCALE Version: 22.02.0.1 (Platform Generic)

Hardware:
Dell R620 with flashed H710.
2x Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
 

bdgusa

Cadet
Joined
Apr 21, 2022
Messages
2
This is what happens if I try to download logs via the UI:
[2022/06/09 08:15:12] (ERROR) middlewared.job.run():424 - Job <bound method accepts.<locals>.wrap.<locals>.nf of <middlewared.plugins.filesystem.FilesystemService object at 0x7fc9dbdd8880>> failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 412, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 448, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/filesystem.py", line 364, in get
    raise CallError(f'{path} is not a file')
middlewared.service_exception.CallError: [EFAULT] /var/log/libvirt/bhyve/1_newvm.log is not a file
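So the UI is looking for a bhyve-style log path that doesn't exist on this install. In the meantime I can just read the qemu log that the filesystem search turned up earlier:

cat /var/db/system/syslog-*/log/libvirt/qemu/1_newvm.log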

I've also just updated TrueNAS SCALE to the latest available.
 

rich45

Dabbler
Joined
Jul 14, 2018
Messages
11
Gentlemen,

Rich here, struggling with VMs not booting after install. If I have read correctly, GRUB is not being transferred/persisted to the hard drive. This is occurring with all Linux VMs.

Using TrueNAS-SCALE-22.02.4.

Is there an easy way around this? Sorry if I have not left enough information; hopefully you understand what I mean.

Thank you,
rich45
 

soleous

Dabbler
Joined
Apr 14, 2021
Messages
30
I know this might be a silly question because it looks like you know what you're doing, but is virtualization enabled in the hardware (BIOS), e.g. Intel VT-x/VT-d?
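If you want to rule that out from the shell, the standard Linux checks apply (nothing SCALE-specific here):

grep -cE 'vmx|svm' /proc/cpuinfo   # non-zero means the CPU virtualization flags are exposed
ls -l /dev/kvm                     # should exist if KVM is usable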

Also, you won't see anything under your VM dataset's directory because the disk is a block-level volume (zvol); it shows up in 'zfs list' and under /dev, as you found.

Edit: I just noticed this is an old thread. rich45, we need more information; I'd recommend creating a new thread rather than reviving an old one, since your issue doesn't look related to this one.
 

Basserra

Dabbler
Joined
Sep 21, 2020
Messages
28
rich45 said:
"Rich here, struggling with VMs not booting after install. If I have read correctly, GRUB is not being transferred/persisted to the hard drive. This is occurring with all Linux VMs. Using TrueNAS-SCALE-22.02.4. Is there an easy way around this?"
Hey Rich, I had the same problem: Linux VMs would not boot after install, while Windows VMs were fine. Note that I only use Debian VMs on TrueNAS SCALE; I have not tested Arch or RHEL (those run on Proxmox with no issues).
I looked into fixing GRUB and came across https://help.ubuntu.com/community/Boot-Repair . I tried to repair it by hand with no results, but simply attaching the Boot-Repair-Disk ISO to the VM's CD-ROM and running the automatic/recommended fix repaired them nicely, and they all boot properly now.
Give it a shot and good luck!
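For reference, on a Debian/Ubuntu guest the manual route is roughly the following (a sketch, assuming a Legacy BIOS VM with the guest's root filesystem on /dev/sda1; adjust device names to your layout, as Boot-Repair automates much of this):

# from a live ISO booted inside the VM:
sudo mount /dev/sda1 /mnt                 # guest root filesystem (assumption)
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt grub-install /dev/sda    # reinstall GRUB to the virtual disk's MBR
sudo chroot /mnt update-grub              # regenerate the GRUB config
# then unmount, detach the live ISO, and reboot the VM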
 