l2arc_headroom performance impact

fa2k

Dabbler
This isn't exactly a question, I'm just sharing an interesting parameter setting. I'm using TrueNAS SCALE, on which the default values for the L2ARC write parameters are:

l2arc_write_max: 8388608
l2arc_headroom: 2
(description here: https://openzfs.github.io/openzfs-docs/man/4/zfs.4.html)
Essentially, every second the L2ARC feed thread scans only the last l2arc_headroom × l2arc_write_max = 16 MB of the ARC lists (the buffers closest to eviction) and writes up to 8 MB (8388608 bytes) of them to L2ARC if they aren't already there.
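For reference, on SCALE (OpenZFS on Linux) you can check the current values from a shell; these are the standard module parameter paths:

cat /sys/module/zfs/parameters/l2arc_write_max   # bytes written to L2ARC per feed interval
cat /sys/module/zfs/parameters/l2arc_headroom    # multiplier of l2arc_write_max to scan for candidates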

I haven't done much benchmarking, but these parameters seem ridiculously low to me, for today's recommended RAM sizes and current drive speeds. I found a post from 2012 that agrees ;) https://www.truenas.com/community/threads/zfs-and-l2arc-tuning.7931/

Then I read the docs and found this interesting bit about l2arc_headroom:
ARC persistence across reboots can be achieved with persistent L2ARC by setting this parameter to 0, allowing the full length of ARC lists to be searched for cacheable content.
That's potentially very cool. If it can look at the *whole ARC*, then it doesn't matter that it only writes 8 MB/s. I'm just surprised that I could effectively change this parameter by a factor of several thousand (from covering 16 MB to something like 80 GB) without causing any weird performance issues (if it has to scan the full ARC every second). It's still vulnerable to churn, just like setting a really high l2arc_write_max, but that's fine in some situations (discussed in this presentation linked from the TrueNAS docs https://www.snia.org/sites/default/...ices_for_OpenZFS_L2ARC_in_the_Era_of_NVMe.pdf ).
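If anyone wants to try the same thing, something like this should do it (these are the standard OpenZFS-on-Linux module parameters; the change is not persistent across reboots unless you put it in e.g. a post-init script, and the write_max value below is just an example, not a recommendation):

echo 0 > /sys/module/zfs/parameters/l2arc_headroom          # 0 = scan the full ARC lists for candidates
echo 67108864 > /sys/module/zfs/parameters/l2arc_write_max  # optional: raise the write cap, e.g. to 64 MB per feed interval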

I'm testing it now. After a reboot it may be faster than before, but I was expecting an even bigger difference. There's no excessive CPU usage so far, but the ARC size is only 6 GB at this point. There's also less writing to L2ARC than I would expect.
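For anyone following along, I'm mostly watching the standard L2ARC counters in the kernel ARC stats, something like:

grep -E '^l2_(size|asize|hits|misses|write_bytes)' /proc/spl/kstat/zfs/arcstats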
 

Dice

Wizard
Cool!
Please report whatever you test, so we can see what can be replicated elsewhere in the community.

I'm also currently interested in the ARC vs L2ARC dynamics.
I found this thread - old, but still insightful - have you seen it?
 

fa2k

Dabbler
I hadn't seen that thread, thanks for pointing it out. So there are mixed opinions on whether the 8 MB default is a good idea. I think that's because the L2ARC isn't itself an ARC but just a FIFO, and since it isn't very smart, it needs different tuning for different workloads.

My situation is a multi-purpose home server with a varied workload (maybe more varied than many home users'). The usage is very bursty, and by now (after a few days of uptime) basically everything has gone into the ARC, so I don't have anything to report on the L2ARC yet. I can say that it doesn't thrash the disks constantly with writes, though, so that's a good sign. Here's one of the three L2ARC disks - not much writing:
[Attachment: Screenshot 2022-05-28 142254.png]
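(For a per-device view, zpool iostat also lists cache vdevs separately; 'tank' below is just a placeholder pool name:)

zpool iostat -v tank 5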


(edit: Part of the reason for getting into L2ARC tuning is that my ARC isn't performing as well as I hoped after I switched to TrueNAS SCALE. I'll make a separate post about this in the SCALE forum.)
 