SCALE using only 50% of RAM for ZFS by default?

flashdrive

Patron
Joined
Apr 2, 2021
Messages
264
I am running the nightly build of Sep 9th

Out of the box, no tuning

A single SMB share that is receiving data will not use more than 16 of the 32 GB.
 

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
echo 59055800320 >> /sys/module/zfs/parameters/zfs_arc_max

I have this as a startup item.

59,055,800,320 bytes = 55 GiB
I have 64 GB of RAM with nothing much else running.
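If you go this route, it's worth confirming the value actually stuck and watching the ARC grow under load. A quick check (standard OpenZFS-on-Linux paths):

Code:
# Configured ceiling in bytes (0 means the ZFS default)
cat /sys/module/zfs/parameters/zfs_arc_max

# Current ARC size and effective maximum from the kernel stats
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats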
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
If we set the max to the same as our total RAM, I expect this won't cause a problem, because the ARC would be shrunk dynamically to make room for applications, similar to how the Linux kernel buffer/cache works? Can anyone confirm?
 

freqlabs

iXsystems
iXsystems
Joined
Jul 18, 2019
Messages
50
I don't consider tuning this higher than the default to be safe on SCALE. With any VMs configured, manual tuning can be overridden because middleware automatically adjusts the ARC max as needed. Besides that, the default max is the limit because the Linux kernel's allocator can end up using more memory than is physically installed beyond that point. See the commit message:
https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

Going beyond 1/2 of physical memory may compromise system stability on Linux.
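For anyone who wants to see what that default works out to on their box, the effective ceiling is exposed in the ARC kstats (a small shell check; with zfs_arc_max left at 0 it lands at roughly half of MemTotal, which matches the 16 of 32 GB the original poster saw):

Code:
# Effective ARC ceiling in bytes
awk '$1 == "c_max" {print $3}' /proc/spl/kstat/zfs/arcstats
# Installed memory in kB, for comparison
grep '^MemTotal' /proc/meminfo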
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
I rarely run VMs, but do on occasion. I recently upgraded to 128 GB of RAM. If I limit my VMs to an aggregate of 16 GB, would it be safe to increase my ARC limit to, say, 96 GB?

I hope they're able to fix the underlying kernel problem at some point so this limitation can be relaxed.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Revisiting that issue ... I currently run TrueNAS SCALE as a VM on my Proxmox system and gave it 32 GB of RAM. TrueNAS always shows ~45-50% of it not utilized. Of course, I could make better use of that RAM by allocating it to other VMs in Proxmox:

[two dashboard screenshots showing memory utilization]


Inside TrueNAS Scale I do not run any VMs or Apps / Containers anymore.

Is there a safe workflow, for that use case, to expand the RAM usage closer to 90%+?
 

ohboi

Dabbler
Joined
Mar 23, 2019
Messages
26
I don't consider tuning this higher than the default to be safe on SCALE. With any VMs configured, manual tuning can be overridden because middleware automatically adjusts the ARC max as needed. Besides that, the default max is the limit because the Linux kernel's allocator can end up using more memory than is physically installed beyond that point. See the commit message:
https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

Going beyond 1/2 of physical memory may compromise system stability on Linux.
Considering this: I have 64 GB of RAM, and I specifically went to 64 GB (my old NAS is running 32 GB of RAM) to increase the ARC capacity. Is it safe to say that, to improve the caching "experience", it would actually be smarter to switch to TrueNAS Core?
 
Joined
Oct 22, 2019
Messages
3,641
Considering this: I have 64 GB of RAM, and I specifically went to 64 GB (my old NAS is running 32 GB of RAM) to increase the ARC capacity. Is it safe to say that, to improve the caching "experience", it would actually be smarter to switch to TrueNAS Core?
There's no way to answer that, since many things are part of the decision-making process. What's more important to you?

The greater library of "Apps", the wider hardware support, the improvements to the GUI, the more focused development in SCALE? (Linux-based)

Or the larger ARC limit and sheer memory/ZFS capabilities of CORE? (FreeBSD-based)

Or maybe some sort of mix in between?

So maybe the TrueCharts library of Apps is more suited for you, even at the cost of a smaller ARC.
 

rigel

Dabbler
Joined
Apr 5, 2023
Messages
19
I have the same issue right now. I mainly chose TrueNAS Scale out of fear that TrueNAS Core support will end at some point in the future, and it also feels like Scale receives more frequent updates.

Currently in Scale I have 256 GB of RAM, a couple of ZFS pools, and a couple of VMs with 8 GB of dedicated RAM each. So I just end up with 112 GB of RAM sitting there unused all the time:

256 GB - 128 GB (default ARC max) - 8 GB - 8 GB = 112 GB
 

dirtyfreebooter

Explorer
Joined
Oct 3, 2020
Messages
72
Here is my take on this. I am also in the camp that runs no VMs and only a few services, and I want to use all of my available RAM for the ARC. The main issue is that the ZFS ARC on Linux does not report its memory as buffer/cache memory; see https://github.com/openzfs/zfs/issues/10251

That being said, there are two tunables of use here:
  • zfs_arc_max
  • zfs_arc_sys_free
By default, TrueNAS has zfs_arc_max set to 0, which falls back to the ZFS default of 50% of RAM. zfs_arc_sys_free is interesting because it tells ZFS to keep at least that much system memory free. I have combined these successfully since the beginning of the SCALE betas.

I made a POSTINIT script that sets the ARC max to 90% of my memory and sets the sys_free parameter to 8 GiB. So up to 90% of my RAM will get used for the ARC, while at the same time at least 8 GiB of memory is kept free. That works for me, since I have my few containers constrained, etc.

The only caveat to this approach: if you use VMs, then when you start a VM, TrueNAS SCALE will override the zfs_arc_max value set by the POSTINIT script. I don't know of a way around this, other than a cron script that keeps re-applying the setting against the middleware (see the sketch at the end of this post). I don't use VMs, so this does not affect me.

Code:
#!/bin/sh

# Make sure the usual system directories are on PATH when run by the middleware
PATH="/bin:/sbin:/usr/bin:/usr/sbin:${PATH}"
export PATH

# Set the ARC ceiling to 90% of total RAM (MemTotal is in kB, so convert to bytes)
ARC_PCT="90"
ARC_BYTES=$(grep '^MemTotal' /proc/meminfo | awk -v pct=${ARC_PCT} '{printf "%d", $2 * 1024 * (pct / 100.0)}')
echo ${ARC_BYTES} > /sys/module/zfs/parameters/zfs_arc_max

# Additionally tell ZFS to always leave at least 8 GiB of system memory free
SYS_FREE_BYTES=$((8*1024*1024*1024))
echo ${SYS_FREE_BYTES} > /sys/module/zfs/parameters/zfs_arc_sys_free


Resulting memory:
[screenshot]

Post init script config:
[screenshot]
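For completeness, the cron-based counter-measure mentioned above might look roughly like the sketch below. This is untested; the script path and the idea of re-checking every few minutes are assumptions, and the middleware will still win in the window between runs.

Code:
#!/bin/sh
# Hypothetical /mnt/pool0/homelab/arc_max_recheck.sh, run from a periodic cron job.
# Re-applies the desired ARC max if the middleware has overridden it (e.g. after a VM start).

PATH="/bin:/sbin:/usr/bin:/usr/sbin:${PATH}"
export PATH

ARC_PCT="90"
WANT=$(grep '^MemTotal' /proc/meminfo | awk -v pct=${ARC_PCT} '{printf "%d", $2 * 1024 * (pct / 100.0)}')
CUR=$(cat /sys/module/zfs/parameters/zfs_arc_max)

# Only write when the value has drifted, to avoid needless churn
if [ "${CUR}" != "${WANT}" ]; then
    echo "${WANT}" > /sys/module/zfs/parameters/zfs_arc_max
fi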
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
That looks promising - I will try to implement that.
 

rigel

Dabbler
Joined
Apr 5, 2023
Messages
19
I made a POSTINIT script that sets the ARC max to 90% of my memory and sets the sys_free parameter to 8 GiB. So up to 90% of my RAM will get used for the ARC, while at the same time at least 8 GiB of memory is kept free. That works for me, since I have my few containers constrained, etc.

Thank you so much for your script. I'm trying to follow your method and have a question. Is it possible to place this .sh script in the /root folder on the boot drive and not in a ZFS pool folder under /mnt?

For example can I place this script here?
/root/postinit.sh
or
/postinit.sh

It's just that on my ZFS pool drives I keep only datasets and don't want to keep any configuration files there.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is it possible to place this .sh script in the /root folder on the boot drive and not in a ZFS pool folder under /mnt?
yes

For example can I place this script here?
/root/postinit.sh
or
/postinit.sh
The first one is OK, not the second.
 

dirtyfreebooter

Explorer
Joined
Oct 3, 2020
Messages
72
Keep in mind that if you place the file at /root/postinit.sh and something happens to your NAS and you have to reinstall and restore from backup, that file will be lost and have to be recreated. The backup will preserve the "Init/Shutdown Script" configuration, but not the script itself. It is not unusual to make a small dataset for things like this; in my case I have a dataset "homelab" under pool0, i.e. /mnt/pool0/homelab, where I keep scripts, tools, and such that I wouldn't want to lose if my boot drive fails, but which are safe on RAIDZ2 + snapshots + remote backups.
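A minimal sketch of that, using the pool and dataset names from my example (adjust to your own pool):

Code:
# Create a small dataset on the data pool for scripts that should survive a boot-drive loss
zfs create pool0/homelab

# Then point the Init/Shutdown Script entry at the copy kept there, e.g.
# /mnt/pool0/homelab/postinit.sh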
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Keep in mind that if you place the file at /root/postinit.sh and something happens to your NAS and you have to reinstall and restore from backup, that file will be lost and have to be recreated. The backup will preserve the "Init/Shutdown Script" configuration, but not the script itself.
Certainly worth noting.

/root is carried over with version upgrades, so it's only the case of boot pool loss that needs to be accounted for, but indeed should be considered.
 

rigel

Dabbler
Joined
Apr 5, 2023
Messages
19
Keep in mind that if you place the file at /root/postinit.sh and something happens to your NAS and you have to reinstall and restore from backup, that file will be lost and have to be recreated. The backup will preserve the "Init/Shutdown Script" configuration, but not the script itself. It is not unusual to make a small dataset for things like this; in my case I have a dataset "homelab" under pool0, i.e. /mnt/pool0/homelab, where I keep scripts, tools, and such that I wouldn't want to lose if my boot drive fails, but which are safe on RAIDZ2 + snapshots + remote backups.

Thank you for your reply! Good point. I ended up creating a special folder for this script on the zfs pool drive. Works great.
 