winnielinnie
4 GiB is working beautifully for me, even still to this day.
What was your final 'reasonable floor'? 4 GB, or 16+ GB?
I traversed my directory tree and watched the size of the ARC metadata, yet it doesn't seem to grow much.
Directory traversal on the NFS share did feel snappier, but I expected arcstats.metadata_size to have at least approached the new floor of 4 GB. It does not; it stays at 1.35 GB.
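For context, one way to watch that figure from a TrueNAS CORE (FreeBSD) shell is via sysctl. This is only a sketch; the exact stat names below are assumptions and can differ between OpenZFS versions:

    # Size of metadata currently held in the ARC, in bytes
    sysctl kstat.zfs.misc.arcstats.metadata_size

    # Wider view of every metadata-related ARC counter
    sysctl kstat.zfs.misc.arcstats | grep -i meta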
Mine doesn't go much past 2 GiB. The reason I'm sticking with 4 GiB is just in case I need the extra breathing room in the future. Technically, I could set mine to 2.1 GiB, and I would still be cruising.
In your case, all your metadata fits within 1.35 GiB (assuming there are no other filesystems on the TrueNAS server in question that haven't yet been traversed). In my case, from three different clients (three different datasets), all the metadata fits within 2.1 GiB.
What setting mine to 4 GiB really means is: "I'm happy with metadata taking up to 4 GiB of the ARC for itself if it's needed." For now, it's only taking up about 2 GiB in the ARC, not 4 GiB. But should it ever need to take up 4 GiB, it has the means/permission to do so.
I suppose "floor" isn't the right term to use. A more accurate way to refer to it is perhaps "allowable ceiling before ZFS starts aggressively evicting metadata from the ARC."
I believe I tried those parameters as well and found no difference. (Others from different communities have shared the same grievances.) Additionally, I stumbled upon this, which might be of interest to your testing too:
The tuneable you proposed above only covers a specific part of the metadata, not "metadata in the ARC in general".
For what it's worth, finding the appropriate value for vfs.zfs.arc.meta_min does the trick, which you've also experienced yourself by traversing and listing directories via NFS (and which I've noticed with rsync listings, as well as when browsing SMB shares with folders containing thousands of files). Setting it to 4 GiB seems to be the best-case scenario; a sketch of the relevant commands is at the end of this post. Here's why I believe that:
Metadata is small compared to raw user data (it's the user data itself that requires large amounts of RAM), so for any system with 32+ GiB of memory and typical use cases, it's unlikely the total metadata will ever exceed 4 GiB. That means that on a 32 GiB system, even if metadata somehow saturates its entire 4 GiB allowance in the ARC, it's still only about 12% of total system memory. For 64 GiB of RAM? Only about 6%. And that is the highest possible "cost" of a snappier system with a real performance boost: immensely faster rsync tasks, much faster directory listings and metadata reads, and snappier browsing over NFS and SMB.
I consider it a "win".
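For reference, here is a minimal sketch of applying this on TrueNAS CORE, assuming 4 GiB expressed in bytes. A change made from the shell only lasts until reboot, so a "sysctl"-type entry under System > Tunables would be needed to keep it persistent:

    # 4 GiB = 4 * 1024^3 = 4294967296 bytes
    sysctl vfs.zfs.arc.meta_min=4294967296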