11.2 Beta - ARC throttled when using Virtual Machines


KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
The F11.2 betas have introduced a change in system behaviour that may not have been on people’s radar. It’s neither mentioned nor explained in the FreeNAS 11.2 Beta guide, but it was included in the beta announcements on the iXsystems blog. (See: https://www.ixsystems.com/blog/library/freenas-11-2-beta1/)

Virtual Machines are more crash-resistant. When a guest is started, the amount of available memory is checked and an initialization error will occur if there are insufficient system resources. When a guest is stopped, its resources are returned to the system.

You may have already seen an error message when trying to start a virtual machine in the Betas.

Until now, the end user could expect FreeNAS to install and run with sane defaults, one of which is the fixed value of the upper memory limit of the ARC, shown by sysctl vfs.zfs.arc_max. The end user was left to make sensible choices about the memory allocated to any virtual machines while the operating system managed the competing memory demands of the ARC, running VMs, etc. Overprovision the VM memory allocation, and VM performance could degrade as the ARC grows with I/O activity. Swapping can occur as available RAM is exhausted and the whole system grinds to a halt.
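
For reference, both of the values in play here can be read from the shell with sysctl. A minimal Python sketch (the sysctl names are the real ones mentioned above; the script itself is just an illustration):

Code:
import subprocess

def sysctl_bytes(name):
    """Return an integer sysctl value in bytes, e.g. vfs.zfs.arc_max."""
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

arc_max = sysctl_bytes("vfs.zfs.arc_max")   # ARC upper memory limit
usermem = sysctl_bytes("hw.usermem")        # memory available to userland
print(f"arc_max: {arc_max / 2**30:.1f} GiB, usermem: {usermem / 2**30:.1f} GiB")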

F11.2 attempts to balance the demand between the ARC and VM memory allocation by throttling the ARC’s upper memory limit. The bottom line is that the value of vfs.zfs.arc_max is no longer fixed if you run virtual machines.
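
To make the idea concrete, here is a hypothetical paraphrase of what “throttling the ARC’s upper memory limit” means — a sketch of the concept only, not the actual middleware code:

Code:
# Hypothetical sketch only - not the vm.py implementation.
def throttle_arc(arc_max, guest_alloc, arc_min):
    """Lower the ARC ceiling by a starting guest's memory allocation,
    never dropping below the ARC's configured floor."""
    return max(arc_max - guest_alloc, arc_min)

GiB = 2**30
# Example: 16 GiB ceiling, 8 GiB guest, 4 GiB floor -> new ceiling of 8 GiB.
print(throttle_arc(16 * GiB, 8 * GiB, 4 * GiB) / GiB)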

So how is this working in practice, and are there any pitfalls? This is where your feedback is needed. One obvious problem is that a VM configuration that runs on F11.1 may fail to meet the criteria of “sufficient system resources” as defined by this new VM memory check. VMs that run under F11.1 may not start under F11.2. Is it also now possible that, under F11.2, a given VM may start in some circumstances but not others?

This new “memory check” can be found in /usr/local/lib/python3.6/site-packages/middlewared/plugins/vm.py, lines 790 to 860, where a hard-coded 35% factor is used. Just how was that value arrived at? Hard-coding it means it’s immutable as far as the end user is concerned. AFAIK, the code is the only place that defines the terms RNP, PRD and RPRD. These mystery labels appear on the new UI’s Virtual Memory Summary page when you select the “table” view, along with two meaningless circular visual indicators. What the end user really needs is meaningful feedback about the storage consumed by VMs versus what is available, and about the memory allocated to VMs versus available memory.
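
As a rough illustration of what such a check could amount to (a sketch based on the sysctls discussed in this thread, not a copy of the vm.py logic — the exact role of the 35% factor is only discoverable from the code linked below):

Code:
# Hypothetical paraphrase of the "sufficient system resources" test.
# Function name and structure are illustrative; see vm.py for the real thing.
def vm_can_start(guest_alloc, usermem, arc_max):
    """A guest is allowed to start only if its allocation fits in the
    user memory left over once the current ARC ceiling is honoured."""
    available = usermem - arc_max
    return guest_alloc <= available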


REF: vm.py code at GitHub

https://github.com/freenas/freenas/...60b/src/middlewared/middlewared/plugins/vm.py
 

Attachments

  • vm_memfail.jpeg (28.1 KB)
  • vm_sum1.jpeg (50.5 KB)

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I like how those cryptic labels are explained in the code but nobody thought to put that in the UI. Classic iXsystems. At least they're trying to implement some sort of resource management...

FWIW:
Code:
The total amount of virtual memory in MB used by guests
Returns a dict with the following information:
    RNP - Running but not provisioned
    PRD - Provisioned but not running
    RPRD - Running and provisioned
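
Spelled out as logic, the bucketing that docstring describes would look something like this (an illustrative reconstruction, not the vm.py code):

Code:
# Illustrative only - maps a guest to the categories quoted above.
def classify(running, provisioned):
    """Return the summary bucket for a guest, per the docstring."""
    if running and provisioned:
        return "RPRD"  # running and provisioned
    if running:
        return "RNP"   # running but not provisioned
    if provisioned:
        return "PRD"   # provisioned but not running
    return None        # neither running nor provisioned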

@KrisBee Thanks for doing the legwork.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Over the last few years we've developed the habit of limiting the ARC to 1/4 of physical memory in our data centre and 1/2 of physical memory on my home NAS, to guarantee breathing room for iocage jails (data centre) and bhyve VMs (home).
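
For anyone following along, that limit is set with the vfs.zfs.arc_max loader tunable, e.g. in /boot/loader.conf (the 16 GiB value is only an example):

Code:
# /boot/loader.conf - cap the ARC at 16 GiB (example value)
vfs.zfs.arc_max="17179869184"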

Does that new feature imply I should remove that tunable from my loader.conf now to try the new goodness? ;)

Thanks
Patrick
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@Patrick M. Hausen Good question: how does this new VM "memory check" interact with any user-set tunables? Check the code, but from my understanding the starting point when deciding whether there are available resources to run a VM is based on the values returned by sysctl hw.usermem and sysctl vfs.zfs.arc_max at the time the VM is started. AFAIK, this only applies to bhyve VMs, so yes, if you want to test against default settings at home, remove that loader.conf tunable.
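
In other words (same caveat as before: a sketch, not the middleware code), the start-time headroom calculation would reduce to something like:

Code:
import subprocess

def sysctl_int(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

def headroom_at_start():
    """User memory not claimed by the ARC ceiling, sampled at the
    moment the guest is started - the two inputs described above."""
    return sysctl_int("hw.usermem") - sysctl_int("vfs.zfs.arc_max")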
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
OK - from reading the code I think I understand what they are trying to achieve. And since I do not overprovision physical memory for VMs, this makes perfect sense to me. I'll remove the tunable and see how it works out.

Patrick
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Done ... all VMs running. Let's wait and see ...
 
