ARC size on Linux smaller by default - tuning zfs_arc_max?

fa2k

Dabbler
Joined
Jan 9, 2022
Messages
34
Default ARC sizes are different on Linux and FreeBSD: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-arc-max

  • Linux: 1/2 of system memory
  • FreeBSD: the larger of all_system_memory - 1GB and 5/8 × all_system_memory
For a NAS with over 32GB RAM, I don't think the Linux formula is a great fit.
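To make the difference concrete, here's a quick sketch of both formulas for a hypothetical 128 GiB machine (illustrative numbers only, in bytes):
Code:
mem=$((128 * 1024**3))   # 137438953472
echo $((mem / 2))        # Linux default:   68719476736 (64 GiB)
echo $((mem - 1024**3))  # FreeBSD term 1: 136365211648 (127 GiB)
echo $((mem * 5 / 8))    # FreeBSD term 2:  85899345920 (80 GiB)
# FreeBSD takes the larger of its two terms, so 127 GiB versus 64 GiB on Linux.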
[Screenshot: memory usage breakdown with ARC capped at 50 GB]

My ARC is capped at 50 GB and there's a bunch of wasted memory (and an obscenely large "services" slice - but I do have a 16 GB VM).

I'm tempted to set it to ca. 100 GB based on this. Any good reason to avoid that? Does anyone know why it's set to 50% on TrueNAS SCALE? Maybe just because it's the default on Linux?

EDIT (to update):
Code:
# cat /sys/module/zfs/parameters/zfs_arc_max
53574109184

Something (TrueNAS?) is setting it to a lower value than the Linux default, which should be either "0" (use the built-in default) or about 68719476736, depending on whether the formula is evaluated. WTF?
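For reference, the cap actually in effect also shows up as c_max in the ARC kstats, so it can be cross-checked there:
Code:
# c_max should match zfs_arc_max when the parameter is set:
grep -w c_max /proc/spl/kstat/zfs/arcstats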
 
Last edited:

freqlabs

iXsystems
Joined
Jul 18, 2019
Messages
50
I don't consider tuning this higher than the default to be safe on SCALE. With any VMs configured, manual tuning can be overridden, because the middleware automatically adjusts the ARC max as needed. Besides that, the default max is the limit because, beyond that point, the Linux kernel's allocator can end up using more memory than is physically installed. See the commit message:
https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

Going beyond 1/2 of physical memory may compromise system stability on Linux.
 

fa2k

Dabbler
Joined
Jan 9, 2022
Messages
34
Thanks freqlabs, good to know that there's a reason for the 50% default. Totally makes sense to set a conservative default so it doesn't go around crashing clean systems. I think it's pretty disappointing to stay with 50% on a NAS though, as RAM is often touted as a silver bullet for performance on ZFS (and in my experience, the caching works really well). I've adjusted it down based on what you said.

It seems hard to extract stats about ZFS SLAB usage (based on posts like this: https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSonLinuxMemoryWhere), but do you know of a way to monitor the overall SLAB fragmentation/usage? In most TrueNAS systems a large percentage of kernel memory will be ARC, so the overall usage could give a hint as to the actual ARC fragmentation and permit more aggressive tuning.

And thanks for explaining about the ARC being adjusted due to VMs - it explains why it's 50 GB not 64.
 

freqlabs

iXsystems
Joined
Jul 18, 2019
Messages
50
I don't see any way to monitor the fragmentation. General usage can be monitored by tools such as vmstat -m or slabtop, among other means.
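For example (exact options may differ between versions):
Code:
# One-shot snapshot of slab caches, sorted by cache size:
slabtop -o -s c | head -n 20
# Plain-text per-cache counters:
vmstat -m | head -n 20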
 
Last edited:

fa2k

Dabbler
Joined
Jan 9, 2022
Messages
34
Thanks for the reply! vmstat is installed on TrueNAS, but the output is a bit hard to understand. I guess I have to experiment with it if I want to go beyond 50%. It's not a production system so it's okay if it goes down.
 

ZXCVBN

Cadet
Joined
Jul 27, 2022
Messages
3
Very interesting. Is the expectation that this will be addressed at some point so we can go above 50% of RAM for the ARC on SCALE (if no VMs are running)? I've reverted to Core in the meantime.
 

freqlabs

iXsystems
Joined
Jul 18, 2019
Messages
50
I don't think middleware will touch the tunable if there are no running VMs. Addressing the underlying issue with the allocator is not likely to happen any time soon, if ever.
 

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
The default setting being conservative is perfectly fine. I seriously doubt the TrueNAS group wants to get blamed for system crashes and the like.

Besides, it's not like it's rocket science to change the setting. Hell, you don't even need to restart to make it apply.
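For example, as root (the 96 GiB figure is purely illustrative - pick your own target in bytes):
Code:
# Takes effect immediately, no reboot needed:
echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max
# Confirm:
cat /sys/module/zfs/parameters/zfs_arc_max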
 

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
Very interesting. Is the expectation that this will be addressed at some point so we can go above 50% of RAM for the ARC on SCALE (if no VMs are running)? I've reverted to Core in the meantime.
Uh, you can change it. Takes 2 seconds. I boosted mine to around 75% of RAM, considering the only thing I have running on it is Plex.
 

WN1X

Explorer
Joined
Dec 2, 2019
Messages
77
It is only unsafe if you run out of free memory. With careful monitoring of app, VM, and other service usage, it should be relatively easy to find a good value. I am currently setting it to 75% of the 64 GB of RAM in my server.
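The arithmetic, in bytes:
Code:
# 75% of 64 GiB:
echo $((64 * 1024**3 * 3 / 4))   # 51539607552 (48 GiB)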
 

freqlabs

iXsystems
Joined
Jul 18, 2019
Messages
50
It's not so much about being out of free memory as it is about fragmentation making the free memory you have unusable.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
The command outputs "0" in my VM running on Proxmox:

Code:
admin@truenas[~]$ cat /sys/module/zfs/parameters/zfs_arc_max
0


I am running only TrueNAS Scale as NAS, no VMs, no Apps. I currently have 64 GiB assigned from the Proxmox host machine:
[Screenshot: Proxmox memory usage pie chart, 64 GiB assigned]


I would like to increase that to use more like 75-80% of RAM. How can I do that?
 

fa2k

Dabbler
Joined
Jan 9, 2022
Messages
34
The command outputs "0" in my VM running on Proxmox:

Code:
admin@truenas[~]$ cat /sys/module/zfs/parameters/zfs_arc_max
0



I would like to increase that to use more like 75-80% of RAM. How can I do that?
The pie chart is not necessarily a good representation of the actual usage, as it doesn't account for fragmentation. That said, if you want to experiment with increasing it, this is what I do:
Go to System Settings, Advanced, and add an Init/Shutdown script. Then you can add a command to set it when you reboot:

[Screenshot: Init/Shutdown script configuration in the TrueNAS UI]
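The command in the script is along these lines (the byte value is an example, not a recommendation):
Code:
# Post-init command; 51539607552 bytes = 48 GiB:
echo 51539607552 > /sys/module/zfs/parameters/zfs_arc_max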
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,107
I am running only TrueNAS Scale as NAS, no VMs, no Apps.
[…]
I would like to increase that to use more like 75-80% of RAM. How can I do that?
There's an easy and safe way for that: Use CORE!
 