
SOLVED `vfs.zfs.arc_max` tunable ignored?

Asday

Dabbler
Joined
Jan 6, 2015
Messages
17
I'm on FreeNAS 11.1-U7.

I previously had autotune enabled, but disabled it a long time ago, while chasing down an issue with qBittorrent, after reading someone on this forum say it was rubbish and only ever messed things up.

I recently added some RAM to the machine and noticed the ARC was maxing out WAY below the size I would expect. I read up, and `sysctl vfs.zfs.arc_max` gave me back roughly 12.6G (in bytes). I found an old tunable left over from autotune and changed it to 59G, expressed in bytes.

That did nothing, and after more reading I found that a reboot was required, even though the manual says the exact opposite. I rebooted and the value changed, but not to what I set: `sysctl vfs.zfs.arc_max` now outputs 20251258880 (roughly 18.9G). My tunable still holds the number I set; something along the line has apparently decided "haha no dude, you don't need that much" and taken control away from me. 18.9G is neither my total RAM minus 1G nor 5/8 of my total RAM.
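A quick sanity check of those two candidate formulas, assuming my 64 GiB of physical RAM (the figures here are mine, not from any documentation):

```shell
# 64 GiB of physical RAM, in bytes
ram=$((64 * 1024 * 1024 * 1024))
echo $((ram - 1073741824))   # RAM minus 1 GiB -> 67645734912
echo $((ram * 5 / 8))        # 5/8 of RAM      -> 42949672960
# The system settled on 20251258880, which matches neither.
```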

I tried changing the tunable from a `sysctl` to a `Loader` type, since https://www.freebsd.org/doc/handbook/zfs-advanced.html states it can be set in "/boot/loader.conf or /etc/sysctl.conf", but I don't know what I'm doing.
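For reference, the `Loader` form of the tunable corresponds, on stock FreeBSD, to a line like the one below in /boot/loader.conf. (FreeNAS manages that file itself, so the Tunables page in the GUI is the right place to set it rather than editing the file by hand; the value shown is my 59 GiB figure.)

```shell
# /boot/loader.conf -- read at boot, before the kernel initializes ZFS
vfs.zfs.arc_max="63350767616"   # 59 GiB expressed in bytes
```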

To hammer home the point that I don't know what I'm doing, I then did this:

Code:
[asday@freenas ~]$ sysctl vfs.zfs.arc_max=63350767616
vfs.zfs.arc_max: 20251258880
sysctl: vfs.zfs.arc_max=63350767616: Operation not permitted
[asday@freenas ~]$ sudo sysctl vfs.zfs.arc_max=63350767616

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

Password: hunter2
vfs.zfs.arc_max: 20251258880
sysctl: vfs.zfs.arc_max=63350767616: Invalid argument
[asday@freenas ~]$


I've seen other people get told off for not posting system specs, but I don't know what would be relevant here, nor am I even sure how to get it. https://i.imgur.com/OHKm5XS.png is my system information page, the motherboard is an ASUS P10S-M, the PSU is a Corsair 450W something or other (from memory), the hard drives are 2TB Seagate Barracuda, the boot drives are a pair of 8GiB SanDisk Cruzer blades (also from memory).

EDIT: Just gonna add more shell output as I find commands that look like they might be useful. First up:

Code:
[asday@freenas ~]$ zfs-stats -M -s -A
------------------------------------------------------------------------
sysctl: unknown oid 'kstat.zfs.misc.arcstats.l2_writes_hdr_miss'
sysctl: unknown oid 'kstat.zfs.misc.arcstats.recycle_miss'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.bogus_streams'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.colinear_hits'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.colinear_misses'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.reclaim_failures'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.reclaim_successes'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.streams_noresets'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.streams_resets'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.stride_hits'
sysctl: unknown oid 'kstat.zfs.misc.zfetchstats.stride_misses'
ZFS Subsystem Report                Sun May 19 01:35:10 2019
------------------------------------------------------------------------
System Memory Statistics:
    Physical Memory:            65429.81M
    Kernel Memory:                3754.64M
    DATA:                98.97%    3716.25M
    TEXT:                1.02%    38.39M
------------------------------------------------------------------------
ARC Misc:
    Deleted:                31
    Recycle Misses:                0
    Mutex Misses:                0
    Evict Skips:                0

ARC Size:
    Current Size (arcsize):        87.08%    16817.89M
    Target Size (Adaptive, c):    100.00%    19313.10M
    Min Size (Hard Limit, c_min):    12.50%    2414.13M
    Max Size (High Water, c_max):    ~8:1    19313.10M

ARC Size Breakdown:
    Recently Used Cache Size (p):    69.62%    13446.31M
    Freq. Used Cache Size (c-p):    30.37%    5866.79M

ARC Hash Breakdown:
    Elements Max:                294159
    Elements Current:        100.00%    294159
    Collisions:                5647
    Chain Max:                0
    Chains:                    5297

ARC Eviction Statistics:
    Evicts Total:                336384
    Evicts Eligible for L2:        89.64%    301568
    Evicts Ineligible for L2:    10.35%    34816
    Evicts Cached to L2:            0

ARC Efficiency
    Cache Access Total:            677185
    Cache Hit Ratio:        42.12%    285277
    Cache Miss Ratio:        57.87%    391908
    Actual Hit Ratio:        38.97%    263918

    Data Demand Efficiency:        39.57%
    Data Prefetch Efficiency:    10.16%

    CACHE HITS BY CACHE LIST:
      Anonymously Used:        7.48%    21359
      Most Recently Used (mru):    54.50%    155477
      Most Frequently Used (mfu):    38.01%    108441
      MRU Ghost (mru_ghost):    0.00%    0
      MFU Ghost (mfu_ghost):    0.00%    0

    CACHE HITS BY DATA TYPE:
      Demand Data:            7.42%    21175
      Prefetch Data:        4.89%    13963
      Demand Metadata:        84.89%    242190
      Prefetch Metadata:        2.78%    7949

    CACHE MISSES BY DATA TYPE:
      Demand Data:            8.25%    32337
      Prefetch Data:        31.49%    123439
      Demand Metadata:        25.11%    98418
      Prefetch Metadata:        35.13%    137714
------------------------------------------------------------------------
ZFS Tunable (sysctl):
    kern.maxusers=4425
    vfs.zfs.vol.immediate_write_sz=32768
    vfs.zfs.vol.unmap_sync_enabled=0
    vfs.zfs.vol.unmap_enabled=1
    vfs.zfs.vol.recursive=0
    vfs.zfs.vol.mode=2
    vfs.zfs.sync_pass_rewrite=2
    vfs.zfs.sync_pass_dont_compress=5
    vfs.zfs.sync_pass_deferred_free=2
    vfs.zfs.zio.dva_throttle_enabled=1
    vfs.zfs.zio.exclude_metadata=0
    vfs.zfs.zio.use_uma=1
    vfs.zfs.zil_slog_bulk=786432
    vfs.zfs.cache_flush_disable=0
    vfs.zfs.zil_replay_disable=0
    vfs.zfs.version.zpl=5
    vfs.zfs.version.spa=5000
    vfs.zfs.version.acl=1
    vfs.zfs.version.ioctl=7
    vfs.zfs.debug=0
    vfs.zfs.super_owner=0
    vfs.zfs.immediate_write_sz=32768
    vfs.zfs.min_auto_ashift=12
    vfs.zfs.max_auto_ashift=13
    vfs.zfs.vdev.queue_depth_pct=1000
    vfs.zfs.vdev.write_gap_limit=4096
    vfs.zfs.vdev.read_gap_limit=32768
    vfs.zfs.vdev.aggregation_limit=1048576
    vfs.zfs.vdev.trim_max_active=64
    vfs.zfs.vdev.trim_min_active=1
    vfs.zfs.vdev.scrub_max_active=2
    vfs.zfs.vdev.scrub_min_active=1
    vfs.zfs.vdev.async_write_max_active=10
    vfs.zfs.vdev.async_write_min_active=1
    vfs.zfs.vdev.async_read_max_active=3
    vfs.zfs.vdev.async_read_min_active=1
    vfs.zfs.vdev.sync_write_max_active=10
    vfs.zfs.vdev.sync_write_min_active=10
    vfs.zfs.vdev.sync_read_max_active=10
    vfs.zfs.vdev.sync_read_min_active=10
    vfs.zfs.vdev.max_active=1000
    vfs.zfs.vdev.async_write_active_max_dirty_percent=60
    vfs.zfs.vdev.async_write_active_min_dirty_percent=30
    vfs.zfs.vdev.mirror.non_rotating_seek_inc=1
    vfs.zfs.vdev.mirror.non_rotating_inc=0
    vfs.zfs.vdev.mirror.rotating_seek_offset=1048576
    vfs.zfs.vdev.mirror.rotating_seek_inc=5
    vfs.zfs.vdev.mirror.rotating_inc=0
    vfs.zfs.vdev.trim_on_init=1
    vfs.zfs.vdev.bio_delete_disable=0
    vfs.zfs.vdev.bio_flush_disable=0
    vfs.zfs.vdev.cache.bshift=16
    vfs.zfs.vdev.cache.size=0
    vfs.zfs.vdev.cache.max=16384
    vfs.zfs.vdev.metaslabs_per_vdev=200
    vfs.zfs.vdev.trim_max_pending=10000
    vfs.zfs.txg.timeout=5
    vfs.zfs.trim.enabled=1
    vfs.zfs.trim.max_interval=1
    vfs.zfs.trim.timeout=30
    vfs.zfs.trim.txg_delay=32
    vfs.zfs.space_map_blksz=4096
    vfs.zfs.spa_min_slop=134217728
    vfs.zfs.spa_slop_shift=5
    vfs.zfs.spa_asize_inflation=24
    vfs.zfs.deadman_enabled=1
    vfs.zfs.deadman_checktime_ms=5000
    vfs.zfs.deadman_synctime_ms=1000000
    vfs.zfs.debug_flags=0
    vfs.zfs.debugflags=0
    vfs.zfs.recover=0
    vfs.zfs.spa_load_verify_data=1
    vfs.zfs.spa_load_verify_metadata=1
    vfs.zfs.spa_load_verify_maxinflight=10000
    vfs.zfs.ccw_retry_interval=300
    vfs.zfs.check_hostid=1
    vfs.zfs.mg_fragmentation_threshold=85
    vfs.zfs.mg_noalloc_threshold=0
    vfs.zfs.condense_pct=200
    vfs.zfs.metaslab.bias_enabled=1
    vfs.zfs.metaslab.lba_weighting_enabled=1
    vfs.zfs.metaslab.fragmentation_factor_enabled=1
    vfs.zfs.metaslab.preload_enabled=1
    vfs.zfs.metaslab.preload_limit=3
    vfs.zfs.metaslab.unload_delay=8
    vfs.zfs.metaslab.load_pct=50
    vfs.zfs.metaslab.min_alloc_size=33554432
    vfs.zfs.metaslab.df_free_pct=4
    vfs.zfs.metaslab.df_alloc_threshold=131072
    vfs.zfs.metaslab.debug_unload=0
    vfs.zfs.metaslab.debug_load=0
    vfs.zfs.metaslab.fragmentation_threshold=70
    vfs.zfs.metaslab.gang_bang=16777217
    vfs.zfs.free_bpobj_enabled=1
    vfs.zfs.free_max_blocks=18446744073709551615
    vfs.zfs.zfs_scan_checkpoint_interval=7200
    vfs.zfs.zfs_scan_legacy=0
    vfs.zfs.no_scrub_prefetch=0
    vfs.zfs.no_scrub_io=0
    vfs.zfs.resilver_min_time_ms=3000
    vfs.zfs.free_min_time_ms=1000
    vfs.zfs.scan_min_time_ms=1000
    vfs.zfs.scan_idle=50
    vfs.zfs.scrub_delay=4
    vfs.zfs.resilver_delay=2
    vfs.zfs.top_maxinflight=32
    vfs.zfs.delay_scale=500000
    vfs.zfs.delay_min_dirty_percent=60
    vfs.zfs.dirty_data_sync=67108864
    vfs.zfs.dirty_data_max_percent=10
    vfs.zfs.dirty_data_max_max=4294967296
    vfs.zfs.dirty_data_max=4294967296
    vfs.zfs.max_recordsize=1048576
    vfs.zfs.default_ibs=17
    vfs.zfs.default_bs=9
    vfs.zfs.zfetch.array_rd_sz=1048576
    vfs.zfs.zfetch.max_idistance=67108864
    vfs.zfs.zfetch.max_distance=33554432
    vfs.zfs.zfetch.min_sec_reap=2
    vfs.zfs.zfetch.max_streams=8
    vfs.zfs.prefetch_disable=0
    vfs.zfs.send_holes_without_birth_time=1
    vfs.zfs.mdcomp_disable=0
    vfs.zfs.per_txg_dirty_frees_percent=30
    vfs.zfs.nopwrite_enabled=1
    vfs.zfs.dedup.prefetch=1
    vfs.zfs.dbuf_cache_lowater_pct=10
    vfs.zfs.dbuf_cache_hiwater_pct=10
    vfs.zfs.dbuf_cache_shift=5
    vfs.zfs.dbuf_cache_max_bytes=632851840
    vfs.zfs.arc_min_prescient_prefetch_ms=6
    vfs.zfs.arc_min_prfetch_ms=1
    vfs.zfs.l2c_only_size=0
    vfs.zfs.mfu_ghost_data_esize=0
    vfs.zfs.mfu_ghost_metadata_esize=0
    vfs.zfs.mfu_ghost_size=0
    vfs.zfs.mfu_data_esize=1013981184
    vfs.zfs.mfu_metadata_esize=1543413248
    vfs.zfs.mfu_size=3404158976
    vfs.zfs.mru_ghost_data_esize=0
    vfs.zfs.mru_ghost_metadata_esize=0
    vfs.zfs.mru_ghost_size=0
    vfs.zfs.mru_data_esize=13438197760
    vfs.zfs.mru_metadata_esize=208108544
    vfs.zfs.mru_size=14097149440
    vfs.zfs.anon_data_esize=0
    vfs.zfs.anon_metadata_esize=0
    vfs.zfs.anon_size=2409472
    vfs.zfs.l2arc_norw=0
    vfs.zfs.l2arc_feed_again=1
    vfs.zfs.l2arc_noprefetch=0
    vfs.zfs.l2arc_feed_min_ms=200
    vfs.zfs.l2arc_feed_secs=1
    vfs.zfs.l2arc_headroom=2
    vfs.zfs.l2arc_write_boost=40000000
    vfs.zfs.l2arc_write_max=10000000
    vfs.zfs.arc_meta_limit=5062814720
    vfs.zfs.arc_free_target=113212
    vfs.zfs.compressed_arc_enabled=1
    vfs.zfs.arc_grow_retry=60
    vfs.zfs.arc_shrink_shift=7
    vfs.zfs.arc_average_blocksize=8192
    vfs.zfs.arc_no_grow_shift=5
    vfs.zfs.arc_min=2531407360
    vfs.zfs.arc_max=20251258880
    vm.kmem_size=21325000704
    vm.kmem_size_scale=1
    vm.kmem_size_min=0
    vm.kmem_size_max=1319413950874
------------------------------------------------------------------------
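One thing that stands out in the dump above, for anyone comparing numbers: the clamped `vfs.zfs.arc_max` is exactly `vm.kmem_size` minus 1 GiB, which suggests the kernel memory size, not my tunable, was the effective ceiling. This is my own arithmetic, not something from the docs:

```shell
# vm.kmem_size from the dump, minus 1 GiB, reproduces the clamped arc_max
echo $((21325000704 - 1073741824))   # -> 20251258880
```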
 

Asday

Dabbler
Joined
Jan 6, 2015
Messages
17
Good afternoon, future intrepid Googlers.

I needed to adjust the `vm.kmem_size` tunable as well. I set it to 59GiB in bytes too, rebooted, and now it works fine. Well, I assume so; the commands output the numbers I'd expect now.

The ARC hasn't had enough time to grow yet.
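For anyone landing here from a search: the fix was raising both tunables together, and the byte value is just 59 GiB multiplied out. I set these through the FreeNAS Tunables UI (both as `Loader` type, with the values I used):

```shell
echo $((59 * 1024 * 1024 * 1024))   # 59 GiB in bytes -> 63350767616
# vfs.zfs.arc_max = 63350767616
# vm.kmem_size    = 63350767616
```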
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Really cool how this forum has the same password hiding feature as IRC.
I guess the OP has already changed the password ;) since the password is still visible here.

Sent from my phone
 

Asday

Dabbler
Joined
Jan 6, 2015
Messages
17
It worked.

@pro lamer :^) :^) :^) :^) :^) :^) :^) :^) :^) :^) :^)
 
