VM "stuck" on

gpk

Cadet
Joined
Nov 19, 2020
Messages
5
I was just testing a VM on TrueNAS 12.0, but after I rebooted the VM it didn't come back up. The Web UI thinks the VM is up, but if I try to power it off I get:

Code:
internal error: Child process (/usr/sbin/bhyvectl --destroy --vm=3_debian) unexpected exit status 1

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1202, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1106, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 1575, in poweroff
    self.vms[vm_data['name']].poweroff()
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 273, in poweroff
    self.domain.destroy()
  File "/usr/local/lib/python3.8/site-packages/libvirt.py", line 1309, in destroy
    if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)
libvirt.libvirtError: internal error: Child process (/usr/sbin/bhyvectl --destroy --vm=3_debian) unexpected exit status 1


If I press "stop" I get:

Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
    await self.future
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 399, in __run_body
    rv = await self.middleware.run_in_thread(self.method, *([self] + args))
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/run_in_thread.py", line 10, in run_in_thread
    return await self.loop.run_in_executor(self.run_in_thread_executor, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 1563, in stop
    vm.stop(vm_data['shutdown_timeout'])
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 251, in stop
    self.domain.shutdown()
  File "/usr/local/lib/python3.8/site-packages/libvirt.py", line 2659, in shutdown
    if ret == -1: raise libvirtError ('virDomainShutdown() failed', dom=self)
libvirt.libvirtError: An error occurred, but the cause is unknown


Running the same command myself (/usr/sbin/bhyvectl --destroy --vm=3_debian), the error I get is:
Code:
VM:3_debian is not created.
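That message from bhyvectl means the kernel no longer has a vmm device for the VM, even though libvirt still thinks the domain is running. A quick way to check is to look under /dev/vmm/, where active bhyve VMs appear on FreeBSD/TrueNAS CORE (a sketch; 3_debian is the VM name from this thread):

```shell
# Active bhyve VMs show up as entries under /dev/vmm/ on FreeBSD.
VM_NAME=3_debian
if [ -e "/dev/vmm/${VM_NAME}" ]; then
    echo "vmm device present: bhyvectl --destroy should succeed"
else
    echo "vmm device missing: matches the 'is not created' error"
fi
```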
 

gpk

Cadet
Joined
Nov 19, 2020
Messages
5
I cloned the VM and the clone booted fine but now I can't remove the old one.
 

Oclair

Cadet
Joined
Apr 18, 2017
Messages
9
gpk said:
I cloned the VM and the clone booted fine but now I can't remove the old one.

I also am unable to delete a VM

Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1191, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/service.py", line 471, in delete
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1191, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 1456, in do_delete
    raise CallError('Unable to retrieve VM status. Failed to destroy VM')
middlewared.service_exception.CallError: [EFAULT] Unable to retrieve VM status. Failed to destroy VM
 

Oclair

Cadet
Joined
Apr 18, 2017
Messages
9
Oclair said:
I also am unable to delete a VM

Umm, I had to manually delete the objects associated with the VM. Isn't that kinda sloppy? Shouldn't that be "built in" to the VM delete?
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
In my case the same issue was caused by a lack of memory: the tunable vfs.zfs.arc_max (ZFS cache) was set to 30 GB while my system has 32 GB in total, so there was no memory left for the VMs. No idea why the value was increased after the upgrade from v11 to v12.

First, try to create the VM via the CLI:
Code:
/usr/sbin/bhyvectl --create --vm=3_debian


Then I was able to power off the VM via the GUI, or via the CLI using one of the following commands:
Code:
/usr/sbin/bhyvectl --destroy --vm=3_debian
/usr/sbin/bhyvectl --force-reset --vm=3_debian
/usr/sbin/bhyvectl --force-poweroff --vm=3_debian
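Putting the two steps together, the recovery amounts to recreating the kernel vmm device so the destroy has something to tear down and libvirt's state gets back in sync (a sketch based on the commands above, with a guard so it only runs where bhyvectl exists):

```shell
# Recreate the missing vmm device, then tear it down cleanly so libvirt's
# idea of the VM state matches the kernel again.
VM=3_debian
BHYVECTL=/usr/sbin/bhyvectl
if [ -x "$BHYVECTL" ]; then
    "$BHYVECTL" --create --vm="${VM}"    # recreate the missing vmm device
    "$BHYVECTL" --destroy --vm="${VM}"   # now destroy it cleanly
else
    echo "bhyvectl not found; run this on the TrueNAS host"
fi
```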
 

gpk

Cadet
Joined
Nov 19, 2020
Messages
5
Interesting. I haven't got any tunables set. I also have 32GiB of memory so I doubt this is due to insufficient memory.

My understanding of the ZFS cache is that the memory is "used", but always available for anything else that needs it. Is that incorrect?
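For what it's worth, the configured cap and the ARC's current size can be read on the host and compared against the GUI figures (a sketch; the sysctl OIDs are the FreeBSD 12-era names used by TrueNAS CORE, and the byte count below is the 30 GB cap mentioned earlier):

```shell
# On the TrueNAS host:
#   sysctl -n vfs.zfs.arc_max               # configured ARC cap, in bytes
#   sysctl -n kstat.zfs.misc.arcstats.size  # current ARC size, in bytes
# Helper to compare those byte counts with the GiB figures in the GUI:
bytes_to_gib() {
    echo $(( $1 / 1024 / 1024 / 1024 ))   # integer GiB, rounded down
}
bytes_to_gib 32212254720   # a 30 GiB cap expressed in bytes -> prints 30
```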

Issuing the `bhyvectl --create --vm=3_debian` command did fix it, though. I didn't have to do anything else as I was then able to control the VM via the web UI again.

Still no idea how this happened, but at least I know how to fix it now.
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
As I wrote, in my case the ZFS cache took all the memory, and in the VM section of the GUI I had:
Available Memory: 0.18 GiB - Caution: Allocating too much memory can slow the system or prevent VMs from running.
I also got some warnings when I tried to start them, and if I tried to restart one VM it hung and gave the same error you're getting.

I'm not saying it's the same for you, but take a look at memory usage, because tuning the ZFS cache solved the issue for me; now I can stop, start and restart VMs without issue (I have 8 VMs running). No idea whether memory usage differs between v11 and v12.

Yesterday I set the vfs.zfs.arc_max value to 20 GB (maybe I can decrease it further, since I have a RAIDZ2 of 4 disks of 4 TB each). Then I rebooted the server (not actually needed for that tunable) and the memory is now allocated as:

[Image 002.png: screenshot of memory allocation after the change]


Before, the ZFS cache took 28 GB and the rest was taken by Services.
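Since vfs.zfs.arc_max takes a byte count, the 20 GB figure above (taken as GiB) works out like this (a sketch; on TrueNAS CORE the value is normally set as a sysctl tunable in the web UI rather than run by hand):

```shell
# vfs.zfs.arc_max is specified in bytes; compute 20 GiB explicitly.
ARC_MAX_BYTES=$(( 20 * 1024 * 1024 * 1024 ))
echo "$ARC_MAX_BYTES"   # prints 21474836480
# One-off change at runtime (FreeBSD exposes this as a writable sysctl):
# sysctl vfs.zfs.arc_max="$ARC_MAX_BYTES"
```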
 

gpk

Cadet
Joined
Nov 19, 2020
Messages
5
Mine's not like that at all. My ZFS cache is currently 24 GiB, but the VM page still says "Available Memory: 27.35 GiB".
 