Cache Hit Ratio Question


Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
Hi all,

I am trying to figure out why a new server I have configured is getting a hit ratio of around 53%, and what I could do to improve it. Please note FreeNAS is virtualised (I know this isn't recommended).

System specs:
  • Server RAX XS8-2260
  • 2 vCPU
  • 48GB RAM
  • 28TB pool of mirrors
  • FreeNAS installed onto a SATA DOM and mirrored
  • 400GB PCIe SSD set as SLOG
  • Intel X550T dual 10Gb NIC
  • Server is used to store various types of files (documents, zip files, videos of varying sizes), accessed by up to 30 people at random times
  • Hosts VMs and uses NFS
In the last two weeks I have spun up three similarly specced servers; the average hit ratios on them are: this one 53%, the second host 63%, and the third around 94%.
I have double-checked the configuration and all three are the same.
arc_summary results
Code:
root@FREENAS1LAF:~ # arc_summary.py
System Memory:

		0.06%   26.62   MiB Active,	 1.16%   557.04  MiB Inact
		69.87%  32.66   GiB Wired,	  0.00%   0	   Bytes Cache
		28.84%  13.48   GiB Free,	   0.07%   31.99   MiB Gap

		Real Installed:						 48.00   GiB
		Real Available:				 99.92%  47.96   GiB
		Real Managed:				   97.46%  46.74   GiB

		Logical Total:						  48.00   GiB
		Logical Used:				   70.78%  33.97   GiB
		Logical Free:				   29.22%  14.03   GiB

Kernel Memory:								  370.00  MiB
		Data:						   89.65%  331.72  MiB
		Text:						   10.35%  38.28   MiB

Kernel Memory Map:							  46.74   GiB
		Size:						   67.61%  31.61   GiB
		Free:						   32.39%  15.14   GiB
																Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
		Storage pool Version:				   5000
		Filesystem Version:					 5
		Memory Throttle Count:				  0

ARC Misc:
		Deleted:								7.64m
		Mutex Misses:						   4.33k
		Evict Skips:							4.33k

ARC Size:							   67.64%  30.94   GiB
		Target Size: (Adaptive)		 92.43%  42.28   GiB
		Min Size (Hard Limit):		  12.50%  5.72	GiB
		Max Size (High Water):		  8:1	 45.74   GiB

ARC Size Breakdown:
		Recently Used Cache Size:	   80.00%  33.83   GiB
		Frequently Used Cache Size:	 20.00%  8.46	GiB

ARC Hash Breakdown:
		Elements Max:						   891.08k
		Elements Current:			   48.98%  436.47k
		Collisions:							 1.22m
		Chain Max:							  4
		Chains:								 11.26k
																Page:  2
------------------------------------------------------------------------

ARC Total accesses:									 19.46m
		Cache Hit Ratio:				53.76%  10.46m
		Cache Miss Ratio:			   46.24%  9.00m
		Actual Hit Ratio:			   47.71%  9.28m

		Data Demand Efficiency:		 64.57%  12.96m
		Data Prefetch Efficiency:	   24.73%  5.69m

		CACHE HITS BY CACHE LIST:
		  Anonymously Used:			 3.56%   372.58k
		  Most Recently Used:		   71.02%  7.43m
		  Most Frequently Used:		 17.73%  1.85m
		  Most Recently Used Ghost:	 4.42%   462.83k
		  Most Frequently Used Ghost:   3.27%   341.89k

		CACHE HITS BY DATA TYPE:
		  Demand Data:				  80.02%  8.37m
		  Prefetch Data:				13.46%  1.41m
		  Demand Metadata:			  6.30%   658.58k
		  Prefetch Metadata:			0.22%   22.93k

		CACHE MISSES BY DATA TYPE:
		  Demand Data:				  51.04%  4.59m
		  Prefetch Data:				47.64%  4.29m
		  Demand Metadata:			  1.27%   114.26k
		  Prefetch Metadata:			0.05%   4.61k
																Page:  3
------------------------------------------------------------------------

																Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:						98.80m
		Hit Ratio:					  9.47%   9.36m
		Miss Ratio:					 90.53%  89.44m

																Page:  5
------------------------------------------------------------------------

																Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
		kern.maxusers						   3405
		vm.kmem_size							50191597568
		vm.kmem_size_scale					  1
		vm.kmem_size_min						0
		vm.kmem_size_max						1319413950874
		vfs.zfs.vol.immediate_write_sz		  32768
		vfs.zfs.vol.unmap_enabled			   1
		vfs.zfs.vol.recursive				   0
		vfs.zfs.vol.mode						2
		vfs.zfs.sync_pass_rewrite			   2
		vfs.zfs.sync_pass_dont_compress		 5
		vfs.zfs.sync_pass_deferred_free		 2
		vfs.zfs.zio.dva_throttle_enabled		1
		vfs.zfs.zio.exclude_metadata			0
		vfs.zfs.zio.use_uma					 1
		vfs.zfs.zil_slog_limit				  786432
		vfs.zfs.cache_flush_disable			 0
		vfs.zfs.zil_replay_disable			  0
		vfs.zfs.version.zpl					 5
		vfs.zfs.version.spa					 5000
		vfs.zfs.version.acl					 1
		vfs.zfs.version.ioctl				   7
		vfs.zfs.debug						   0
		vfs.zfs.super_owner					 0
		vfs.zfs.immediate_write_sz			  32768
		vfs.zfs.min_auto_ashift				 12
		vfs.zfs.max_auto_ashift				 13
		vfs.zfs.vdev.queue_depth_pct			1000
		vfs.zfs.vdev.write_gap_limit			4096
		vfs.zfs.vdev.read_gap_limit			 32768
		vfs.zfs.vdev.aggregation_limit		  131072
		vfs.zfs.vdev.trim_max_active			64
		vfs.zfs.vdev.trim_min_active			1
		vfs.zfs.vdev.scrub_max_active		   2
		vfs.zfs.vdev.scrub_min_active		   1
		vfs.zfs.vdev.async_write_max_active	 10
		vfs.zfs.vdev.async_write_min_active	 1
		vfs.zfs.vdev.async_read_max_active	  3
		vfs.zfs.vdev.async_read_min_active	  1
		vfs.zfs.vdev.sync_write_max_active	  10
		vfs.zfs.vdev.sync_write_min_active	  10
		vfs.zfs.vdev.sync_read_max_active	   10
		vfs.zfs.vdev.sync_read_min_active	   10
		vfs.zfs.vdev.max_active				 1000
		vfs.zfs.vdev.async_write_active_max_dirty_percent  60
		vfs.zfs.vdev.async_write_active_min_dirty_percent  30
		vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
		vfs.zfs.vdev.mirror.non_rotating_inc	0
		vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
		vfs.zfs.vdev.mirror.rotating_seek_inc   5
		vfs.zfs.vdev.mirror.rotating_inc		0
		vfs.zfs.vdev.trim_on_init			   1
		vfs.zfs.vdev.bio_delete_disable		 0
		vfs.zfs.vdev.bio_flush_disable		  0
		vfs.zfs.vdev.cache.bshift			   16
		vfs.zfs.vdev.cache.size				 0
		vfs.zfs.vdev.cache.max				  16384
		vfs.zfs.vdev.metaslabs_per_vdev		 200
		vfs.zfs.vdev.trim_max_pending		   10000
		vfs.zfs.txg.timeout					 5
		vfs.zfs.trim.enabled					1
		vfs.zfs.trim.max_interval			   1
		vfs.zfs.trim.timeout					30
		vfs.zfs.trim.txg_delay				  32
		vfs.zfs.space_map_blksz				 4096
		vfs.zfs.spa_min_slop					134217728
		vfs.zfs.spa_slop_shift				  5
		vfs.zfs.spa_asize_inflation			 24
		vfs.zfs.deadman_enabled				 0
		vfs.zfs.deadman_checktime_ms			5000
		vfs.zfs.deadman_synctime_ms			 1000000
		vfs.zfs.debug_flags					 0
		vfs.zfs.debugflags					  0
		vfs.zfs.recover						 0
		vfs.zfs.spa_load_verify_data			1
		vfs.zfs.spa_load_verify_metadata		1
		vfs.zfs.spa_load_verify_maxinflight	 10000
		vfs.zfs.ccw_retry_interval			  300
		vfs.zfs.check_hostid					1
		vfs.zfs.mg_fragmentation_threshold	  85
		vfs.zfs.mg_noalloc_threshold			0
		vfs.zfs.condense_pct					200
		vfs.zfs.metaslab.bias_enabled		   1
		vfs.zfs.metaslab.lba_weighting_enabled  1
		vfs.zfs.metaslab.fragmentation_factor_enabled  1
		vfs.zfs.metaslab.preload_enabled		1
		vfs.zfs.metaslab.preload_limit		  3
		vfs.zfs.metaslab.unload_delay		   8
		vfs.zfs.metaslab.load_pct			   50
		vfs.zfs.metaslab.min_alloc_size		 33554432
		vfs.zfs.metaslab.df_free_pct			4
		vfs.zfs.metaslab.df_alloc_threshold	 131072
		vfs.zfs.metaslab.debug_unload		   0
		vfs.zfs.metaslab.debug_load			 0
		vfs.zfs.metaslab.fragmentation_threshold  70
		vfs.zfs.metaslab.gang_bang			  16777217
		vfs.zfs.free_bpobj_enabled			  1
		vfs.zfs.free_max_blocks				 18446744073709551615
		vfs.zfs.no_scrub_prefetch			   0
		vfs.zfs.no_scrub_io					 0
		vfs.zfs.resilver_min_time_ms			3000
		vfs.zfs.free_min_time_ms				1000
		vfs.zfs.scan_min_time_ms				1000
		vfs.zfs.scan_idle					   50
		vfs.zfs.scrub_delay					 4
		vfs.zfs.resilver_delay				  2
		vfs.zfs.top_maxinflight				 32
		vfs.zfs.delay_scale					 500000
		vfs.zfs.delay_min_dirty_percent		 60
		vfs.zfs.dirty_data_sync				 67108864
		vfs.zfs.dirty_data_max_percent		  10
		vfs.zfs.dirty_data_max_max			  4294967296
		vfs.zfs.dirty_data_max				  4294967296
		vfs.zfs.max_recordsize				  1048576
		vfs.zfs.zfetch.array_rd_sz			  1048576
		vfs.zfs.zfetch.max_idistance			67108864
		vfs.zfs.zfetch.max_distance			 8388608
		vfs.zfs.zfetch.min_sec_reap			 2
		vfs.zfs.zfetch.max_streams			  8
		vfs.zfs.prefetch_disable				0
		vfs.zfs.send_holes_without_birth_time   1
		vfs.zfs.mdcomp_disable				  0
		vfs.zfs.nopwrite_enabled				1
		vfs.zfs.dedup.prefetch				  1
		vfs.zfs.l2c_only_size				   0
		vfs.zfs.mfu_ghost_data_esize			13137477632
		vfs.zfs.mfu_ghost_metadata_esize		0
		vfs.zfs.mfu_ghost_size				  13137477632
		vfs.zfs.mfu_data_esize				  6645604864
		vfs.zfs.mfu_metadata_esize			  161513472
		vfs.zfs.mfu_size						6817051648
		vfs.zfs.mru_ghost_data_esize			4865523712
		vfs.zfs.mru_ghost_metadata_esize		0
		vfs.zfs.mru_ghost_size				  4865523712
		vfs.zfs.mru_data_esize				  25995336192
		vfs.zfs.mru_metadata_esize			  143064064
		vfs.zfs.mru_size						26296689664
		vfs.zfs.anon_data_esize				 0
		vfs.zfs.anon_metadata_esize			 0
		vfs.zfs.anon_size					   1259008
		vfs.zfs.l2arc_norw					  1
		vfs.zfs.l2arc_feed_again				1
		vfs.zfs.l2arc_noprefetch				1
		vfs.zfs.l2arc_feed_min_ms			   200
		vfs.zfs.l2arc_feed_secs				 1
		vfs.zfs.l2arc_headroom				  2
		vfs.zfs.l2arc_write_boost			   8388608
		vfs.zfs.l2arc_write_max				 8388608
		vfs.zfs.arc_meta_limit				  12279463936
		vfs.zfs.arc_free_target				 84997
		vfs.zfs.compressed_arc_enabled		  1
		vfs.zfs.arc_shrink_shift				7
		vfs.zfs.arc_average_blocksize		   8192
		vfs.zfs.arc_min						 6139731968
		vfs.zfs.arc_max						 49117855744



Happy to provide any additional information required.
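
For reference, the ratios in the report look like they come straight from the raw hit/miss counters; here is a quick sanity check in Python using the rounded figures above (this is just my reading of how arc_summary derives them, so treat the formulas as an assumption):
Code:
# Sanity check of the ratios reported above, using the rounded counters
# from the arc_summary output (counts in millions of accesses).
hits = 10.46e6         # "Cache Hit Ratio" count
misses = 9.00e6        # "Cache Miss Ratio" count
total = hits + misses  # "ARC Total accesses" (~19.46m)

print("Cache Hit Ratio:  %.2f%%" % (100 * hits / total))  # ~53.8%

# "Actual Hit Ratio" appears to count only the MRU + MFU hits (i.e. it
# excludes anonymous and ghost-list hits) against total accesses.
mru_hits = 7.43e6
mfu_hits = 1.85e6
print("Actual Hit Ratio: %.2f%%" % (100 * (mru_hits + mfu_hits) / total))  # ~47.7%

Both come out within rounding of what the report shows, so at least the 53% figure is internally consistent.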

Cheers
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
53% hits is great! Most people won't even come close to that.

Your cache will also get better or worse the more you use it. It probably has to run for a while before the numbers are meaningful. What version of FreeNAS are you running?
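
If you want to watch it settle as the cache warms up, you can sample the raw counters yourself rather than re-running arc_summary. A minimal sketch, assuming the stock FreeBSD kstat.zfs.misc.arcstats sysctls and a Python interpreter on the box:
Code:
# Poll the ARC hit/miss counters once a minute and print the hit ratio over
# each interval, so you can see whether the cache is still warming up.
import subprocess
import time

def arcstat(name):
    # Read a single kstat.zfs.misc.arcstats counter via sysctl.
    out = subprocess.check_output(["sysctl", "-n", "kstat.zfs.misc.arcstats." + name])
    return int(out)

prev_hits, prev_misses = arcstat("hits"), arcstat("misses")
while True:
    time.sleep(60)
    hits, misses = arcstat("hits"), arcstat("misses")
    dh, dm = hits - prev_hits, misses - prev_misses
    if dh + dm:
        print("hit ratio over last 60s: %.1f%%" % (100.0 * dh / (dh + dm)))
    prev_hits, prev_misses = hits, misses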
 

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
53% hits is great! Most people won't even come close to that.

Your cache will also get better or worse the more you use it. It probably has to run for a while before the numbers are meaningful. What version of FreeNAS are you running?

Oh! I was looking everywhere and everything said to aim for the high 90s!

I am running 11-Nightlies.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I have 128GB of memory and my workload is mostly random reads, since my server is primarily a media server, and my hit ratio is:
Code:
Cache Hit Ratio:				65.29%  96.90m
Cache Miss Ratio:			   34.71%  51.52m
Actual Hit Ratio:			   53.30%  79.11m


How is your performance? Any complaints? If not, then ignore the number and go on your merry way.
 

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
How is your performance? Any complaints? If not, then ignore the number and go on your merry way.

From my testing, reading and writing is decent and maxes out our 1Gb connections from the desktops. Browsing to the server can take a little time, but that could also be DNS; it is quicker once the server has already been browsed to.
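
A simple way to put a number on the read side is to time a large sequential read from the share on a desktop; a rough Python sketch (the mount point and file name are placeholders, and the file should be larger than the client's RAM so its own cache doesn't skew the result):
Code:
# Rough sequential-read throughput check from a client: stream a large file
# off the mounted share and report MiB/s. The path below is a placeholder.
import time

PATH = "/mnt/freenas_share/big_test_file.bin"  # hypothetical mount + file
CHUNK = 1024 * 1024                            # read in 1 MiB chunks

start = time.time()
read_bytes = 0
with open(PATH, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.time() - start

print("read %.1f MiB in %.1fs -> %.1f MiB/s"
      % (read_bytes / 2.0**20, elapsed, read_bytes / 2.0**20 / elapsed))

A saturated 1Gb link shows up as roughly 110 MiB/s after protocol overhead.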
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I was looking everywhere and everything said to aim for the high 90s!
There was a longstanding bug in how the ARC hit ratio was calculated that resulted in artificially high numbers. Once it was fixed, the number went way down--I run under 15%, and that's with 128 GB of RAM.
 

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
There was a longstanding bug in how the ARC hit ratio was calculated that resulted in artificially high numbers. Once it was fixed, the number went way down--I run under 15%, and that's with 128 GB of RAM.

Wow...ok, starting to feel a lot better, thanks!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I assume the cache usage varies because the working set varies.

53% sounds about normal.

You could add some more RAM and/or an L2ARC.

I see between 50-80% on my servers.
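
One hint as to whether more RAM (a bigger ARC) or an L2ARC would actually pay off is the size of the ghost lists, which are already in the sysctl dump in the first post (vfs.zfs.mru_ghost_size / vfs.zfs.mfu_ghost_size). The ghost lists track data that was recently evicted, so if they are large relative to the live ARC, a bigger cache would likely have served many of those misses. A minimal sketch that pulls the numbers out, assuming the stock FreeBSD vfs.zfs sysctls shown above:
Code:
# Compare the ARC ghost-list sizes against the live ARC (MRU + MFU) size.
# Large ghost lists relative to the ARC suggest that more RAM or an L2ARC
# would have absorbed a good share of the recent misses.
import subprocess

def zfs_sysctl(name):
    # Read a vfs.zfs.* sysctl value as an integer number of bytes.
    return int(subprocess.check_output(["sysctl", "-n", "vfs.zfs." + name]))

ghost = zfs_sysctl("mru_ghost_size") + zfs_sysctl("mfu_ghost_size")
live = zfs_sysctl("mru_size") + zfs_sysctl("mfu_size")

print("Live ARC (MRU+MFU): %5.1f GiB" % (live / 2.0**30))
print("Ghost lists:        %5.1f GiB" % (ghost / 2.0**30))
print("Ghost/ARC ratio:    %5.1f%%" % (100.0 * ghost / live))

With the values in the first post that works out to roughly 17 GiB of ghost entries against about 31 GiB of live ARC, so more cache probably would lift the hit ratio somewhat.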
 