Very low ARC Hit Ratio


djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
So I know that I really need to wait longer than a few days to get an accurate reading, but before I upgraded my server (from a G3250 and 8GB of non-ECC memory) I was consistently sitting around 80% (even after rebooting). Now that I've upgraded to a Xeon and 32GB of ECC memory, I've been at around 70%. I have 14TB of usable storage (15TB total) across six drives: two 1TB drives in a mirror, plus one 1TB drive and three 4TB drives as independent disks (no RAID). I'm mostly storing media that I stream, although I haven't streamed much beyond a few tests recently.

I feel like there has to be some issue, rather than it just needing more time to settle.

http://imgur.com/a/L9SmJ
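
(For what it's worth, the counters behind that graph can also be read straight from the ZFS kstats; the hit ratio is just hits / (hits + misses). A quick check, assuming the stock FreeBSD sysctl names:)
Code:
#!/bin/sh
# cumulative ARC hits and misses since boot
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# hit ratio as a percentage: 100 * hits / (hits + misses)
echo "scale=2; 100 * $hits / ($hits + $misses)" | bc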
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Have you enabled autotune by chance?
This?
[screenshot of the Tunables list]

(If so, I don't recall doing anything that would have affected these settings.)
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Yes, are those settings from the older system? I think the "vm.kmem_size" looks small.
Yeah, I reused the same config as the old system.

Is there a way to automatically update all those settings?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Turn off auto-tune and remove any parameters it has set. You shouldn't turn on auto-tune unless you're in three-digit-GB RAM and multi-TB territory.
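
If you want to see what autotune is currently overriding before you remove anything, the values in effect can be checked from the shell. A rough sketch (the exact set of tunables autotune creates depends on your hardware; these are the ones that came up in this thread):
Code:
# values currently in effect for the tunables discussed here
sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max vfs.zfs.arc_min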
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Turn off auto-tune and remove any parameters it has set. You shouldn't turn on auto-tune unless you're in three-digit-GB RAM and multi-TB territory.
So I should disable all the things under tunables?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
So I should disable all the things under tunables?
Sure. I would recommend first disabling autotune, then removing/deleting each setting listed. Next, reboot and see how performance is. If you so desire, you can then enable autotune once more and reboot again to see if there is any performance increase.

Autotune is a "set once" thing; it sets the configuration based on the hardware it detects at the time. So there's no harm in clearing it and having it re-apply, especially when you have upgraded the hardware (motherboard, CPU, and RAM).
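
After the reboot, one quick way to confirm the old limits are gone is to compare the ARC ceiling against installed RAM; with no tunable set, FreeBSD sizes the ARC automatically (roughly all RAM minus 1 GiB on a box like this, as a rule of thumb rather than a guarantee):
Code:
# physical memory vs. the ARC ceiling now in effect
sysctl -n hw.physmem
sysctl -n vfs.zfs.arc_max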

 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Sure. I would recommend first disabling autotune, then removing/deleting each setting listed. Next, reboot and see how performance is. If you so desire, you can then enable autotune once more and reboot again to see if there is any performance increase.

Autotune is a "set once" thing; it sets the configuration based on the hardware it detects at the time. So there's no harm in clearing it and having it re-apply, especially when you have upgraded the hardware (motherboard, CPU, and RAM).

I already disabled autotune, but it didn't clear out the tunables section, which is why I asked. If I restart, will it automatically clear out that section, or do I have to do it manually?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I already disabled autotune, but it didn't clear out the tunables section, which is why I asked. If I restart, will it automatically clear out that section, or do I have to do it manually?
You have to select each of the ones listed in the GUI and delete/remove it; they will not be removed automatically.

 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
It looks like you have zfs.arc_max set to about 4.6GB, which is going to hurt your hit ratio.
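
A quick way to see that cap is to compare the hard limit against the adaptive target and the current ARC size (assuming the stock kstat names):
Code:
# c_max = hard ceiling, c = adaptive target, size = what the ARC currently holds
sysctl kstat.zfs.misc.arcstats.c_max kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size

If c_max comes back at about 4.6GiB, the old loader tunable is still being applied.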
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
OP, just checking back to see how things worked out?
I didn't reenable autotune, and after about a day and a half I'm sitting at 75%.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Run arc_summary.py from the command line to see how your ARC is doing (plus lots of other info).
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Run arc_summary.py from the command line to see how your ARC is doing (plus lots of other info).
Code:
System Memory:

1.65% 523.07 MiB Active, 1.34% 424.36 MiB Inact
14.99% 4.64 GiB Wired, 0.02% 6.68 MiB Cache
82.00% 25.41 GiB Free, 0.01% 1.80 MiB Gap

Real Installed: 32.00 GiB
Real Available: 99.84% 31.95 GiB
Real Managed: 96.99% 30.99 GiB

Logical Total: 32.00 GiB
Logical Used: 19.29% 6.17 GiB
Logical Free: 80.71% 25.83 GiB

Kernel Memory: 391.97 MiB
Data: 93.96% 368.30 MiB
Text: 6.04% 23.67 MiB

Kernel Memory Map: 30.99 GiB
Size: 8.89% 2.75 GiB
Free: 91.11% 28.23 GiB
Page: 1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Storage pool Version: 5000
Filesystem Version: 5
Memory Throttle Count: 0

ARC Misc:
Deleted: 137
Recycle Misses: 0
Mutex Misses: 0
Evict Skips: 0

ARC Size: 9.76% 2.93 GiB
Target Size: (Adaptive) 100.00% 29.99 GiB
Min Size (Hard Limit): 12.50% 3.75 GiB
Max Size (High Water): 8:1 29.99 GiB

ARC Size Breakdown:
Recently Used Cache Size: 50.00% 14.99 GiB
Frequently Used Cache Size: 50.00% 14.99 GiB

ARC Hash Breakdown:
Elements Max: 141.44k
Elements Current: 99.99% 141.43k
Collisions: 18.36k
Chain Max: 3
Chains: 2.58k
Page: 2
------------------------------------------------------------------------

ARC Total accesses: 6.55m
Cache Hit Ratio: 75.86% 4.97m
Cache Miss Ratio: 24.14% 1.58m
Actual Hit Ratio: 73.96% 4.85m

Data Demand Efficiency: 99.69% 2.30m
Data Prefetch Efficiency: 0.37% 4.37k

CACHE HITS BY CACHE LIST:
Anonymously Used: 2.51% 124.62k
Most Recently Used: 13.63% 677.93k
Most Frequently Used: 83.86% 4.17m
Most Recently Used Ghost: 0.00% 0
Most Frequently Used Ghost: 0.00% 0

CACHE HITS BY DATA TYPE:
Demand Data: 46.03% 2.29m
Prefetch Data: 0.00% 16
Demand Metadata: 51.47% 2.56m
Prefetch Metadata: 2.51% 124.61k

CACHE MISSES BY DATA TYPE:
Demand Data: 0.45% 7.15k
Prefetch Data: 0.28% 4.35k
Demand Metadata: 97.98% 1.55m
Prefetch Metadata: 1.29% 20.46k
Page: 3
------------------------------------------------------------------------

Page: 4
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)
DMU Efficiency: 10.03m
Hit Ratio: 41.07% 4.12m
Miss Ratio: 58.93% 5.91m

Colinear: 5.91m
Hit Ratio: 0.01% 346
Miss Ratio: 99.99% 5.91m

Stride: 4.01m
Hit Ratio: 99.98% 4.01m
Miss Ratio: 0.02% 881

DMU Misc:
Reclaim: 5.91m
Successes: 0.08% 4.46k
Failures: 99.92% 5.91m

Streams: 113.57k
+Resets: 0.01% 16
-Resets: 99.99% 113.55k
Bogus: 0
Page: 5
------------------------------------------------------------------------

Page: 6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
kern.maxusers 2380
vm.kmem_size 33270939648
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 329853485875
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 0
vfs.zfs.mfu_ghost_metadata_lsize 0
vfs.zfs.mfu_ghost_size 0
vfs.zfs.mfu_data_lsize 565377024
vfs.zfs.mfu_metadata_lsize 852220416
vfs.zfs.mfu_size 1444428288
vfs.zfs.mru_ghost_data_lsize 0
vfs.zfs.mru_ghost_metadata_lsize 0
vfs.zfs.mru_ghost_size 0
vfs.zfs.mru_data_lsize 365257728
vfs.zfs.mru_metadata_lsize 48122368
vfs.zfs.mru_size 1090016768
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 229376
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 8049299456
vfs.zfs.arc_meta_used 2211718816
vfs.zfs.arc_shrink_shift 5
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 4024649728
vfs.zfs.arc_max 32197197824
vfs.zfs.dedup.prefetch 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 3430363955
vfs.zfs.free_max_blocks 131072
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled 1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold 70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.recover 0
vfs.zfs.space_map_blksz 32768
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 1
vfs.zfs.txg.timeout 5
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.larger_ashift_minimal 0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent 60
vfs.zfs.vdev.async_write_active_min_dirty_percent 30
vfs.zfs.snapshot_list_prefetch 0
vfs.zfs.version.ioctl 4
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.use_uma 1
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 2
Page: 7
------------------------------------------------------------------------

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ah, if you just rebooted in order to nuke the old autotune variables, wait like 24 hours and THEN look at it.
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Ah, if you just rebooted in order to nuke the old autotune variables, wait like 24 hours and THEN look at it.
I restarted about 36 hours ago.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Are you maybe not doing very much with your filer?
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Are you maybe not doing very much with your filer?
That's possible. I mostly just stream and I've been somewhat busy these past few days.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ARC is mostly going to do useful stuff on moderately-to-insanely busy systems. It is less beneficial on lightly used systems, and not that beneficial where access to the pool is mostly random.
 