L2 Arc Expiry?

kspare

Guru
Joined
Feb 19, 2015
Messages
508
kstat.zfs.misc.arcstats.abd_chunk_waste_size: 5632
kstat.zfs.misc.arcstats.cached_only_in_progress: 99
kstat.zfs.misc.arcstats.arc_raw_size: 0
kstat.zfs.misc.arcstats.arc_sys_free: 0
kstat.zfs.misc.arcstats.arc_need_free: 0
kstat.zfs.misc.arcstats.demand_hit_prescient_prefetch: 0
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 33508590
kstat.zfs.misc.arcstats.async_upgrade_sync: 862905
kstat.zfs.misc.arcstats.arc_meta_min: 16777216
kstat.zfs.misc.arcstats.arc_meta_max: 16263419336
kstat.zfs.misc.arcstats.arc_dnode_limit: 16944617548
kstat.zfs.misc.arcstats.arc_meta_limit: 169446175488
kstat.zfs.misc.arcstats.arc_meta_used: 12391619360
kstat.zfs.misc.arcstats.arc_prune: 0
kstat.zfs.misc.arcstats.arc_loaned_bytes: 0
kstat.zfs.misc.arcstats.arc_tempreserve: 34816
kstat.zfs.misc.arcstats.arc_no_grow: 1
kstat.zfs.misc.arcstats.memory_available_bytes: 2445938688
kstat.zfs.misc.arcstats.memory_free_bytes: 8151232512
kstat.zfs.misc.arcstats.memory_all_bytes: 274707058688
kstat.zfs.misc.arcstats.memory_indirect_count: 0
kstat.zfs.misc.arcstats.memory_direct_count: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_rebuild_log_blks: 40274
kstat.zfs.misc.arcstats.l2_rebuild_bufs_precached: 3
kstat.zfs.misc.arcstats.l2_rebuild_bufs: 41160028
kstat.zfs.misc.arcstats.l2_rebuild_asize: 1998282268672
kstat.zfs.misc.arcstats.l2_rebuild_size: 2689376109056
kstat.zfs.misc.arcstats.l2_rebuild_lowmem: 0
kstat.zfs.misc.arcstats.l2_rebuild_cksum_lb_errors: 0
kstat.zfs.misc.arcstats.l2_rebuild_dh_errors: 0
kstat.zfs.misc.arcstats.l2_rebuild_io_errors: 0
kstat.zfs.misc.arcstats.l2_rebuild_unsupported: 0
kstat.zfs.misc.arcstats.l2_rebuild_success: 1
kstat.zfs.misc.arcstats.l2_data_to_meta_ratio: 1301
kstat.zfs.misc.arcstats.l2_log_blk_count: 85220
kstat.zfs.misc.arcstats.l2_log_blk_asize: 1781460992
kstat.zfs.misc.arcstats.l2_log_blk_avg_asize: 20478
kstat.zfs.misc.arcstats.l2_log_blk_writes: 426816
kstat.zfs.misc.arcstats.l2_hdr_size: 5758779840
kstat.zfs.misc.arcstats.l2_asize: 1688927885824
kstat.zfs.misc.arcstats.l2_size: 2333799247360
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 59918
kstat.zfs.misc.arcstats.l2_evict_l1cached: 15088233
kstat.zfs.misc.arcstats.l2_evict_reading: 6
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 12711
kstat.zfs.misc.arcstats.l2_writes_lock_retry: 13517
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_done: 200332
kstat.zfs.misc.arcstats.l2_writes_sent: 200332
kstat.zfs.misc.arcstats.l2_write_bytes: 10215244741120
kstat.zfs.misc.arcstats.l2_read_bytes: 937862768640
kstat.zfs.misc.arcstats.l2_rw_clash: 3
kstat.zfs.misc.arcstats.l2_feeds: 202006
kstat.zfs.misc.arcstats.l2_misses: 89147711
kstat.zfs.misc.arcstats.l2_hits: 47213167
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata: 10564286464
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data: 183380527104
kstat.zfs.misc.arcstats.mfu_ghost_size: 193944813568
kstat.zfs.misc.arcstats.mfu_evictable_metadata: 996560384
kstat.zfs.misc.arcstats.mfu_evictable_data: 14806487040
kstat.zfs.misc.arcstats.mfu_size: 16342947840
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 16929645056
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 13950877696
kstat.zfs.misc.arcstats.mru_ghost_size: 30880522752
kstat.zfs.misc.arcstats.mru_evictable_metadata: 676257792
kstat.zfs.misc.arcstats.mru_evictable_data: 182688302080
kstat.zfs.misc.arcstats.mru_size: 195040431104
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_size: 5111551488
kstat.zfs.misc.arcstats.other_size: 140100096
kstat.zfs.misc.arcstats.bonus_size: 4313280
kstat.zfs.misc.arcstats.dnode_size: 37518672
kstat.zfs.misc.arcstats.dbuf_size: 98268144
kstat.zfs.misc.arcstats.metadata_size: 2503537664
kstat.zfs.misc.arcstats.data_size: 213991785984
kstat.zfs.misc.arcstats.hdr_size: 3989228576
kstat.zfs.misc.arcstats.overhead_size: 9331061248
kstat.zfs.misc.arcstats.uncompressed_size: 300677985280
kstat.zfs.misc.arcstats.compressed_size: 207164393472
kstat.zfs.misc.arcstats.size: 226383571488
kstat.zfs.misc.arcstats.c_max: 225928233984
kstat.zfs.misc.arcstats.c_min: 8584595584
kstat.zfs.misc.arcstats.c: 225928233984
kstat.zfs.misc.arcstats.p: 199570979264
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.hash_chains: 22158939
kstat.zfs.misc.arcstats.hash_collisions: 860458201
kstat.zfs.misc.arcstats.hash_elements_max: 80546935
kstat.zfs.misc.arcstats.hash_elements: 75919965
kstat.zfs.misc.arcstats.evict_l2_skip: 0
kstat.zfs.misc.arcstats.evict_l2_ineligible: 291833958400
kstat.zfs.misc.arcstats.evict_l2_eligible: 50089597440
kstat.zfs.misc.arcstats.evict_l2_cached: 13153210230272
kstat.zfs.misc.arcstats.evict_not_enough: 11154
kstat.zfs.misc.arcstats.evict_skip: 511911
kstat.zfs.misc.arcstats.access_skip: 62
kstat.zfs.misc.arcstats.mutex_miss: 6418
kstat.zfs.misc.arcstats.deleted: 385567094
kstat.zfs.misc.arcstats.mfu_ghost_hits: 12547988
kstat.zfs.misc.arcstats.mfu_hits: 2999554318
kstat.zfs.misc.arcstats.mru_ghost_hits: 5548402
kstat.zfs.misc.arcstats.mru_hits: 194028073
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 417199
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 532255
kstat.zfs.misc.arcstats.prefetch_data_misses: 50161687
kstat.zfs.misc.arcstats.prefetch_data_hits: 15106282
kstat.zfs.misc.arcstats.demand_metadata_misses: 4815824
kstat.zfs.misc.arcstats.demand_metadata_hits: 2956607809
kstat.zfs.misc.arcstats.demand_data_misses: 80973255
kstat.zfs.misc.arcstats.demand_data_hits: 230234305
kstat.zfs.misc.arcstats.misses: 136367965
kstat.zfs.misc.arcstats.hits: 3202480652
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Your ARC hit rate is ~96%, and it's pretty much all from the MFU side, so your churn should indeed have settled down. Your L2ARC is sitting at ~1.5 TB used (after compression) and turning in a ~35% hit rate, so it's still got value. Your prefetch hit rate, though, is poor at ~23%.
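Those percentages come straight from the arcstats dump above; a quick sketch of the arithmetic, with the counters copied from kspare's output:

```python
# Counters taken verbatim from the kstat.zfs.misc.arcstats dump above.
hits, misses = 3202480652, 136367965          # overall ARC demand+prefetch
l2_hits, l2_misses = 47213167, 89147711       # L2ARC
pf_hits = 532255 + 15106282                   # prefetch_metadata_hits + prefetch_data_hits
pf_misses = 417199 + 50161687                 # prefetch_metadata_misses + prefetch_data_misses

# Hit rate = hits / (hits + misses) for each class of access.
arc_rate = hits / (hits + misses)
l2_rate = l2_hits / (l2_hits + l2_misses)
pf_rate = pf_hits / (pf_hits + pf_misses)

print(f"ARC {arc_rate:.1%}  L2ARC {l2_rate:.1%}  prefetch {pf_rate:.1%}")
# → ARC 95.9%  L2ARC 34.6%  prefetch 23.6%
```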

Your tunables also have you scanning the last ~1G of your ARC (l2arc_write_max * l2arc_headroom) every second (l2arc_feed_secs). I'd be interested to see if I could put together a dtrace script to measure how much time/memory bandwidth is being spent in l2arc_feed_thread(), and potentially reduce that.
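For reference, the scan-depth arithmetic works out like this. The actual tunable values aren't shown in this part of the thread, so the numbers below are illustrative guesses that happen to produce the ~1G figure:

```python
# Hypothetical tunable values chosen to match the "~1G scanned per second"
# observation; the real settings come from earlier in the thread.
l2arc_write_max = 512 * 1024 * 1024   # bytes written per feed cycle (assumed)
l2arc_headroom = 2                    # scan multiplier ahead of the write target
l2arc_feed_secs = 1                   # seconds between feed cycles

# The feed thread scans roughly write_max * headroom bytes of the ARC's
# eviction tail each cycle, once per feed interval.
scan_per_sec = l2arc_write_max * l2arc_headroom // l2arc_feed_secs
print(scan_per_sec // (1024 ** 2), "MiB of ARC tail scanned per second")
# → 1024 MiB of ARC tail scanned per second
```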

I would consider dialing back the l2arc_write tunables, especially now that persistent L2ARC is available (set sysctl vfs.zfs.l2arc.rebuild_enabled=1).

Do you have a comparison set of values from another server?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I'm not worried about persistent L2ARC a whole lot; we never reboot with data on the drives, we always migrate off first.
My log drives are two RMS-200s, for metadata I have two 800 GB P3700s, and the L2ARC is a 2 TB P3700.