VMs don't work after upgrading from 11.2 to 12.0-U2

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
Hi,

I decided to upgrade my system after learning that FreeNAS and TrueNAS have finally merged.

The upgrade itself went without any problem: first from 11.2 to 11.2-U8, then to 11.3-U5, and finally to 12.0-U2.

After that I upgraded my jail to 12.0 as well, with the iocage upgrade -r 12.0-RELEASE jail command. It was almost painless: there were some confirmation prompts and config edits, but it finished successfully. Then I realized there actually was a problem:

My VMs were not working. One of them in particular was important, because it was a RancherOS machine that had around 10 Docker instances running all the time.

When I tried to start it I got this error:
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 1595, in start
    self.vms[vm['name']].start(vm_data=vm)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/vm.py", line 166, in start
    if self.domain.isActive():
  File "/usr/local/lib/python3.8/site-packages/libvirt.py", line 1566, in isActive
    if ret == -1: raise libvirtError ('virDomainIsActive() failed', dom=self)
libvirt.libvirtError: internal error: client socket is closed


I checked some posts here, but the error message was different in those cases.

What could be the problem?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
12.x now allows setting the number of cores and threads per VM. Make sure the sockets*cores*threads product isn't greater than your physical CPU's capacity.
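A quick way to sanity-check this from a shell (a sketch: `sysctl -n hw.ncpu` reports the logical CPU count on FreeBSD, with `nproc` as a fallback on other systems, and the topology values below are examples to substitute from your own VM's settings):

```shell
# Compare the VM's CPU topology against the host's logical CPU count.
ncpu=$(sysctl -n hw.ncpu 2>/dev/null || nproc)  # logical CPUs on the host
sockets=1; cores=2; threads=1                   # example values from the VM config
total=$((sockets * cores * threads))
echo "VM wants $total vCPUs, host has $ncpu"
[ "$total" -le "$ncpu" ] || echo "topology exceeds host capacity"
```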
 

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
Unfortunately, that's not the case. I checked that too, and changed the settings to 1 CPU, 2 cores, 1 thread; the error is the same. I should be able to start at least one VM, yet I cannot run any of them. The other one is a tiny Debian machine with 1 CPU/1 core/1 thread that I created for testing. None of them works.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I would try recreating the VM and simply assigning the existing zvol to the virtual disk device. After all, all the state you are interested in is probably in that zvol. Probably copy over the MAC address for the virtual network, too.
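If you go that route, the values worth noting down first are the backing disk/zvol path and the NIC's MAC address. A trivial way to pull just those lines out of the pretty-printed `midclt call vm.query | jq` output (shown here against a pasted snippet of that output; on the host you would pipe the live command into the same grep):

```shell
# Filter the vm.query output down to the fields you need to re-enter
# when recreating the VM: the disk path and the MAC address.
grep -E '"(path|mac)":' <<'EOF'
          "type": "E1000",
          "mac": "00:a0:98:33:54:f1",
          "nic_attach": "igb0"
          "path": "/mnt/media/VM/rancher_rancheros",
          "type": "AHCI",
EOF
```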
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Please provide the output of midclt call vm.query | jq.
 

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
Here is the output:

Code:
[
  {
    "id": 2,
    "name": "rancheros",
    "description": "",
    "vcpus": 1,
    "memory": 8192,
    "autostart": true,
    "time": "LOCAL",
    "grubconfig": "/mnt/media/.bhyve_containers/configs/2_rancheros/grub/grub.cfg",
    "bootloader": "GRUB",
    "cores": 2,
    "threads": 1,
    "shutdown_timeout": 90,
    "devices": [
      {
        "id": 3,
        "dtype": "NIC",
        "attributes": {
          "type": "E1000",
          "mac": "00:a0:98:33:54:f1",
          "nic_attach": "igb0"
        },
        "order": 1002,
        "vm": 2
      },
      {
        "id": 4,
        "dtype": "RAW",
        "attributes": {
          "path": "/mnt/media/VM/rancher_rancheros",
          "type": "AHCI",
          "boot": true,
          "size": 21474836480,
          "logical_sectorsize": null,
          "physical_sectorsize": null
        },
        "order": 1001,
        "vm": 2
      }
    ],
    "status": {
      "state": "ERROR",
      "pid": null,
      "domain_state": "ERROR"
    }
  },
  {
    "id": 8,
    "name": "ubuntu",
    "description": "",
    "vcpus": 2,
    "memory": 1024,
    "autostart": true,
    "time": "LOCAL",
    "grubconfig": null,
    "bootloader": "UEFI",
    "cores": 1,
    "threads": 1,
    "shutdown_timeout": 90,
    "devices": [
      {
        "id": 27,
        "dtype": "NIC",
        "attributes": {
          "type": "E1000",
          "mac": "00:a0:98:14:12:71",
          "nic_attach": "igb0"
        },
        "order": 1003,
        "vm": 8
      },
      {
        "id": 28,
        "dtype": "DISK",
        "attributes": {
          "path": "/dev/zvol/media/VM/debian-bjc5t",
          "type": "AHCI",
          "physical_sectorsize": null,
          "logical_sectorsize": null
        },
        "order": 1001,
        "vm": 8
      },
      {
        "id": 30,
        "dtype": "VNC",
        "attributes": {
          "vnc_port": 5900,
          "wait": true,
          "vnc_resolution": "800x600",
          "vnc_bind": "192.168.1.250",
          "vnc_password": "",
          "vnc_web": true
        },
        "order": 1002,
        "vm": 8
      }
    ],
    "status": {
      "state": "ERROR",
      "pid": null,
      "domain_state": "ERROR"
    }
  },
  {
    "id": 9,
    "name": "rancherv2",
    "description": "ranchverv2",
    "vcpus": 1,
    "memory": 8192,
    "autostart": true,
    "time": "LOCAL",
    "grubconfig": null,
    "bootloader": "UEFI",
    "cores": 2,
    "threads": 1,
    "shutdown_timeout": 90,
    "devices": [
      {
        "id": 33,
        "dtype": "NIC",
        "attributes": {
          "type": "E1000",
          "mac": "00:a0:98:30:83:45",
          "nic_attach": "igb0"
        },
        "order": 1002,
        "vm": 9
      },
      {
        "id": 34,
        "dtype": "DISK",
        "attributes": {
          "path": "/dev/zvol/jail/rancherv2-qkm5a",
          "type": "AHCI",
          "physical_sectorsize": null,
          "logical_sectorsize": null
        },
        "order": 1001,
        "vm": 9
      },
      {
        "id": 35,
        "dtype": "CDROM",
        "attributes": {
          "path": "/mnt/media2/Apps/turnkey-core-16.0-buster-amd64.iso"
        },
        "order": 1000,
        "vm": 9
      },
      {
        "id": 36,
        "dtype": "VNC",
        "attributes": {
          "wait": false,
          "vnc_port": 36098,
          "vnc_resolution": "1024x768",
          "vnc_bind": "0.0.0.0",
          "vnc_password": "",
          "vnc_web": true
        },
        "order": 1002,
        "vm": 9
      }
    ],
    "status": {
      "state": "ERROR",
      "pid": null,
      "domain_state": "ERROR"
    }
  }
]
 

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
I think there is something wrong with the VM subsystem itself, because even when I try to create a new VM, there are more problems. If it doesn't work, I'll install TrueNAS from scratch and import my volumes. I plan to move it under Proxmox anyway.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How much available memory do you see under Virtual Machines? The VM definitions look OK, so they should start. Also check that the required kernel modules are loaded: run kldstat and look for vmm.ko and nmdm.ko.
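For reference, a check along those lines (the captured listing below stands in for live `kldstat` output so the snippet is self-contained; on the host you would pipe `kldstat` itself, and a missing module could be loaded with `kldload vmm` or `kldload nmdm`):

```shell
# Count the bhyve modules in a kldstat listing; both must be present.
kld_out='
 9    1 0xffffffff8331c000   537420 vmm.ko
10    1 0xffffffff83854000      afc nmdm.ko
'
found=$(printf '%s\n' "$kld_out" | grep -Ec 'vmm\.ko|nmdm\.ko')
echo "found $found of 2 required modules"
```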
 

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
I have 48 GB of total memory and gave 8 GB of it to the Rancher VM. I seem to have 31 GB wired (free to use), and swap is completely unused. I checked with kldstat; everything looks OK:
Code:
Id Refs Address                Size Name
 1   95 0xffffffff80200000  25af7f8 kernel
 2    1 0xffffffff827b0000   658498 openzfs.ko
 3    1 0xffffffff82e09000   2253c0 if_qlxgbe.ko
 4    1 0xffffffff8302f000    33288 if_bnxt.ko
 5    1 0xffffffff83064000   100fb0 ispfw.ko
 6    1 0xffffffff83165000    11a98 ipmi.ko
 7    2 0xffffffff83177000     2ef0 smbus.ko
 8    1 0xffffffff83319000     2220 uplcom.ko
 9    1 0xffffffff8331c000   537420 vmm.ko
10    1 0xffffffff83854000      afc nmdm.ko
11    1 0xffffffff83855000      2ea dtraceall.ko
12    9 0xffffffff83856000     75a8 opensolaris.ko
13    9 0xffffffff8385e000    3be70 dtrace.ko
14    1 0xffffffff8389a000      5f8 dtmalloc.ko
15    1 0xffffffff8389b000     18c0 dtnfscl.ko
16    1 0xffffffff8389d000     1fa1 fbt.ko
17    1 0xffffffff8389f000    547c0 fasttrap.ko
18    1 0xffffffff838f4000      b98 sdt.ko
19    1 0xffffffff838f5000     70f4 systrace.ko
20    1 0xffffffff838fd000     707c systrace_freebsd32.ko
21    1 0xffffffff83905000      f8c profile.ko
22    1 0xffffffff83906000     4718 geom_multipath.ko
23    1 0xffffffff8390b000    1121c hwpmc.ko
24    1 0xffffffff8391d000    13410 t4_tom.ko
25    1 0xffffffff83931000      c7e toecore.ko
26    1 0xffffffff83932000    3c4c0 linux.ko
27    3 0xffffffff8396f000     4b80 linux_common.ko
28    1 0xffffffff83974000    35ce0 linux64.ko
29    1 0xffffffff839aa000     5510 linprocfs.ko
30    1 0xffffffff839b0000      acf mac_ntpd.ko
 

hillpig

Cadet
Joined
Sep 1, 2021
Messages
5
Hi,

Do you have a workaround? I encountered a similar issue after upgrading from 11.3-U3.2 to 12.0-U5.1.

Thank you!
 