11.2 BETA1: Can't get VMs to start

Status: Not open for further replies.

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
I'm able to create a VM, but it will not start. All devices are configured correctly.

I'm tinkering with different configurations to figure out if I can get it to work. I'll let you know.
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
Also tested in both GUIs; still can't get them to start.
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
Found this in the middlewared.log. The VM I created is MainServer.


[2018/07/11 21:34:24] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 4
[2018/07/11 21:47:55] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:48:49] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:49:45] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:50:44] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:51:11] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:51:45] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9
[2018/07/11 21:54:13] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 9


[2018/07/11 21:55:57] (WARNING) middlewared.devd_listen():93 - Failed to parse devd message: !system=ZFS subsystem=ZFS type=misc.fs.zfs.history_event pool_name=DataStore pool_guid=13067072719128896663 history_hostname=freenas.home.lan history_dsname=DataStore/VM/MainServer history_internal_str=refreservation=54529376256 history_internal_name=set history_dsid=421 history_txg=2775136 history_time=1531346157
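For anyone who wants to pull just these warnings out of their own log, here's a minimal sketch, assuming the stock log location at /var/log/middlewared.log and the exact message format shown above:

Code:
# Minimal sketch: scan middlewared.log for the "Cannot guarantee memory"
# warnings shown above. Assumes the stock log path /var/log/middlewared.log.
import re

LOG_PATH = "/var/log/middlewared.log"
PATTERN = re.compile(r"Cannot guarantee memory for guest id: (\d+)")

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            print(f"guest id {match.group(1)}: {line.strip()}")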
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
Hold off. I did a reboot and got it to start. I'm going to test creating another VM and see if it starts; if not, I'll reboot and see if it starts after that.
 

jclendineng

Explorer
Joined
Mar 14, 2017
Messages
58
You need to:

1. Use virtio as opposed to e1000, or the VM will crash constantly (virtio is much faster and more stable anyway).
2. Go into the VNC device settings and disable "Wait to Boot".
3. Make sure you have enough RAM; ZFS uses a lot of RAM, as much as you give it. 8 GB is required for FreeNAS to run properly, more if you have VMs (I have 32 GB).
4. UEFI issues with CentOS (possibly others); the fix is in the forums.
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
You need to:

1. Use virtio as opposed to e1000, or the VM will crash constantly (virtio is much faster and more stable anyway).
2. Go into the VNC device settings and disable "Wait to Boot".
3. Make sure you have enough RAM; ZFS uses a lot of RAM, as much as you give it. 8 GB is required for FreeNAS to run properly, more if you have VMs (I have 32 GB).
4. UEFI issues with CentOS (possibly others); the fix is in the forums.

1. I always use VirtIO.
2. I never enable "Wait to Boot".
3. I have tons of RAM.
4. I don't run CentOS; I run Ubuntu Server.

I think it has something to do with me upgrading the ZFS pool after the FreeNAS upgrade. All seems good so far.
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
After some testing: you can build a VM, but it will not start. I got one to start after I rebooted FreeNAS; now it has stopped and I can't get it to start again. Not sure if this is a bug or something on my end.
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
Any way to get this to work without a fresh install?
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
I wonder if I should create a bug ticket on this?
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Definitely, they actually tend to be very helpful there.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Any way to get this to work without a fresh install?

I did some tests with FN11.2 BETA using a VM. There are bugs in the new UI (see: https://forums.freenas.org/index.php?threads/multiple-vm-bugs-in-fn11-2-beta.68503/), but I could get Linux VMs to work within FN11.2 BETA.

I wonder why your middleware log refers to history_hostname=freenas.home.lan history_dsname=DataStore/VM/MainServer when I'd expect you to be using zvols for your VMs. Out of interest, what was the guest OS?

Incidentally, that FN11.2 instance was running under KVM with just 4 GB of memory, and the nested Linux VMs were allocated 1 GB. So it's a little odd to see those memory warning messages; what else was your system doing at the time?
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
I remember when the last version came out, it used a newer version of RancherOS which had to be upgraded in the VM before upgrading the OS, or something of that sort (even though you can't fully upgrade RancherOS in a VM because of the way it's set up, it was still necessary to get the VMs to work once you updated FreeNAS).
Could it be the same with this beta? Just search the forum; it was for the last update, I believe.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I remember when the last version came out, it used a newer version of RancherOS which had to be upgraded in the VM before upgrading the OS, or something of that sort (even though you can't fully upgrade RancherOS in a VM because of the way it's set up, it was still necessary to get the VMs to work once you updated FreeNAS).
Could it be the same with this beta? Just search the forum; it was for the last update, I believe.

I don't think that's relevant at all; that was strictly a problem with "Docker Host" VMs. Here there appears to be a problem with memory allocation. Something new has been introduced in FN11.2:

Virtual Machines are more crash-resistant. When a guest is started, the amount of available memory is checked and an initialization error will occur if there are insufficient system resources. When a guest is stopped, its resources are returned to the system.

You can see this in middlewared.log when a running VM is stopped, with the guest memory being returned to the ARC, e.g.:

Code:
[2018/07/18 08:04:45] (DEBUG) VMService.stop():386 - ===> Soft Stop VM: deb9 ID: 3 BHYVE_CODE: None
[2018/07/18 08:04:46] (DEBUG) VMService.run():253 - deb9: Unhandled ps2 keyboard command 0xf6
[2018/07/18 08:04:47] (DEBUG) VMService.run():253 - deb9: fbuf frame buffer base: 0x842e00000 [sz 16777216]
[2018/07/18 08:04:47] (DEBUG) VMService.run():253 - deb9: Waiting for vnc client...
[2018/07/18 08:04:47] (INFO) VMService.run():268 - ===> Powered off VM: deb9 ID: 3 BHYVE_CODE: 1
[2018/07/18 08:04:47] (ERROR) VMService.running():398 - ===> VMM deb9 is running without bhyve process.
[2018/07/18 08:04:47] (DEBUG) VMService.__teardown_guest_vmemory():298 - ===> Give back guest memory to ARC.: 1073741824
[2018/07/18 08:04:47] (WARNING) VMService.destroy_vm():281 - ===> Destroying VM: deb9 ID: 3 BHYVE_CODE: 1
[2018/07/18 08:04:47] (DEBUG) VMService.kill_bhyve_web():365 - ==> Killing WEBVNC: 4092



So if I allocate all available RAM to a single VM, you see this in the middlewared.log:

Code:
[2018/07/18 08:14:28] (WARNING) VMService.__init_guest_vmemory():830 - ===> Cannot guarantee memory for guest id: 3


In the new UI the VM appears to start and is shown as running when actually it is not. So this looks like yet another bug.
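
For anyone curious what that pre-start check amounts to, here is a rough illustration of the idea. This is not the actual VMService code, just a sketch that compares the guest's requested memory against the free memory the kernel reports (the sysctl names are standard FreeBSD ones; the real middleware check is presumably more involved, e.g. the logs above suggest it also coordinates with the ARC):

Code:
# Illustration only -- NOT the actual FreeNAS VMService code. It sketches the
# new 11.2 behaviour: check available memory before starting a guest and
# log a warning if the allocation cannot be guaranteed.
import subprocess

def sysctl(name):
    """Read an integer sysctl value on FreeBSD."""
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

def can_guarantee(guest_bytes):
    free_bytes = sysctl("vm.stats.vm.v_free_count") * sysctl("hw.pagesize")
    return free_bytes >= guest_bytes

guest_ram = 1 * 1024 ** 3  # 1 GiB, like the nested Linux VMs mentioned above
if not can_guarantee(guest_ram):
    print("===> Cannot guarantee memory for guest")  # mirrors the log warning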
 

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
You should create a ticket, since you have a lot of data for them to look at; maybe they can figure out what's going on and fix it.
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
@KrisBee ah, that makes sense: the FreeNAS team applied a bug fix that they completed a while ago (before 11.1-U5 was released) that added a 'seatbelt' on the amount of memory a VM can use.
The whole point, to my understanding, was to address the issue that memory usage by VMs was resulting in a steadily growing inactive bucket (without it being refreshed/thrown into the laundry bucket), which would also result in a steady increase of swap usage and a steady decrease of the wired memory bucket (some of the info seems to point to different block/page sizes used by the VMs when they are running a different OS).

I guess the reason they didn't release the fix in 11.1-U5 is that it may be a pretty big rework.

From what I've seen, it seems like 11.2 is actually using the laundry bucket now and recycling inactive memory, but some people are still seeing swap usage.

I hope your info helps them iron out the kinks... I’ve been resorting to a series of tunables to work around this, but unfortunately so far I’ve only gotten the pagedaemon to clear the laundry bucket once before it reverts to the same old behavior.
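
If it helps anyone watch this behavior, the buckets top displays (Free/Inact/Laundry/Wired) can be read straight from sysctl. A small sketch, assuming FreeBSD 11.x counter names (the laundry counter only exists on releases that have the laundry queue):

Code:
# Print the memory buckets that top shows, straight from sysctl.
# Assumes FreeBSD 11.x counter names; run this on the FreeNAS host.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

BUCKETS = {
    "free": "vm.stats.vm.v_free_count",
    "inactive": "vm.stats.vm.v_inactive_count",
    "laundry": "vm.stats.vm.v_laundry_count",
    "wired": "vm.stats.vm.v_wire_count",
}

page = sysctl("hw.pagesize")
for label, name in BUCKETS.items():
    print(f"{label:>8}: {sysctl(name) * page / 1024 ** 2:10.1f} MiB")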
 