ARC Size stays put at 112.5GB despite system having 256GB of Memory


vrod
Dabbler · Joined: Mar 14, 2016 · Messages: 39
Hello all,

So yeah, the title pretty much says it all. The system has been working excellently so far, but I'm trying to figure out why the ARC size has limited itself to 112.5GB. Looking at the graph, it has pretty much flatlined at that size for the last day, even though I have done about 300-400GB of file transfers in that time. I have no L2ARC configured.

I am running the latest FreeNAS Stable (11.1-U2) without autotune enabled.
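
For reference, the live ARC size and its configured bounds can also be read straight from sysctl. A minimal check in Python (assuming FreeBSD's standard kstat.zfs.misc.arcstats and vfs.zfs.arc_min/arc_max names):

Code:
# Minimal sketch: current ARC size and configured bounds via sysctl.
# Assumes FreeBSD's standard ZFS kstat/tunable names.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

GiB = 1024.0 ** 3
print("ARC size: %6.2f GiB" % (sysctl("kstat.zfs.misc.arcstats.size") / GiB))
print("arc_min:  %6.2f GiB" % (sysctl("vfs.zfs.arc_min") / GiB))
print("arc_max:  %6.2f GiB" % (sysctl("vfs.zfs.arc_max") / GiB))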

I ran the arc_summary.py script, which reports the following:

Code:
System Memory:

		0.07%   171.90  MiB Active,	 0.33%   839.29  MiB Inact
		50.28%  125.45  GiB Wired,	  0.00%   0	   Bytes Cache
		49.33%  123.08  GiB Free,	   0.00%   0	   Bytes Gap

		Real Installed:						 256.00  GiB
		Real Available:				 99.97%  255.93  GiB
		Real Managed:				   97.49%  249.52  GiB

		Logical Total:						  256.00  GiB
		Logical Used:				   51.60%  132.10  GiB
		Logical Free:				   48.40%  123.90  GiB

Kernel Memory:								  1.86	GiB
		Data:						   97.99%  1.82	GiB
		Text:						   2.01%   38.24   MiB

Kernel Memory Map:							  249.52  GiB
		Size:						   2.63%   6.57	GiB
		Free:						   97.37%  242.95  GiB
																Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
		Storage pool Version:				   5000
		Filesystem Version:					 5
		Memory Throttle Count:				  0

ARC Misc:
		Deleted:								247
		Mutex Misses:						   0
		Evict Skips:							0

ARC Size:							   45.26%  112.47  GiB
		Target Size: (Adaptive)		 100.00% 248.52  GiB
		Min Size (Hard Limit):		  12.50%  31.06   GiB
		Max Size (High Water):		  8:1	 248.52  GiB

ARC Size Breakdown:
		Recently Used Cache Size:	   50.00%  124.26  GiB
		Frequently Used Cache Size:	 50.00%  124.26  GiB

ARC Hash Breakdown:
		Elements Max:						   2.68m
		Elements Current:			   100.00% 2.68m
		Collisions:							 341.75k
		Chain Max:							  4
		Chains:								 102.62k
																Page:  2
------------------------------------------------------------------------

ARC Total accesses:									 2.85m
		Cache Hit Ratio:				94.27%  2.69m
		Cache Miss Ratio:			   5.73%   163.34k
		Actual Hit Ratio:			   79.48%  2.27m

		Data Demand Efficiency:		 98.95%  1.77m
		Data Prefetch Efficiency:	   99.93%  462.19k

		CACHE HITS BY CACHE LIST:
		  Anonymously Used:			 15.69%  421.90k
		  Most Recently Used:		   49.06%  1.32m
		  Most Frequently Used:		 35.24%  947.37k
		  Most Recently Used Ghost:	 0.00%   0
		  Most Frequently Used Ghost:   0.00%   0

		CACHE HITS BY DATA TYPE:
		  Demand Data:				  65.00%  1.75m
		  Prefetch Data:				17.18%  461.89k
		  Demand Metadata:			  17.43%  468.45k
		  Prefetch Metadata:			0.40%   10.65k

		CACHE MISSES BY DATA TYPE:
		  Demand Data:				  11.38%  18.59k
		  Prefetch Data:				0.19%   305
		  Demand Metadata:			  87.73%  143.29k
		  Prefetch Metadata:			0.70%   1.15k
																Page:  3
------------------------------------------------------------------------


DMU Prefetch Efficiency:						18.43m
		Hit Ratio:					  60.69%  11.18m
		Miss Ratio:					 39.31%  7.24m

																Page:  5
------------------------------------------------------------------------


ZFS Tunable (sysctl):
		kern.maxusers						   16715
		vm.kmem_size							267917467648
		vm.kmem_size_scale					  1
		vm.kmem_size_min						0
		vm.kmem_size_max						1319413950874
		vfs.zfs.vol.immediate_write_sz		  32768
		vfs.zfs.vol.unmap_sync_enabled		  0
		vfs.zfs.vol.unmap_enabled			   1
		vfs.zfs.vol.recursive				   0
		vfs.zfs.vol.mode						2
		vfs.zfs.sync_pass_rewrite			   2
		vfs.zfs.sync_pass_dont_compress		 5
		vfs.zfs.sync_pass_deferred_free		 2
		vfs.zfs.zio.dva_throttle_enabled		1
		vfs.zfs.zio.exclude_metadata			0
		vfs.zfs.zio.use_uma					 1
		vfs.zfs.zil_slog_bulk				   786432
		vfs.zfs.cache_flush_disable			 0
		vfs.zfs.zil_replay_disable			  0
		vfs.zfs.version.zpl					 5
		vfs.zfs.version.spa					 5000
		vfs.zfs.version.acl					 1
		vfs.zfs.version.ioctl				   7
		vfs.zfs.debug						   0
		vfs.zfs.super_owner					 0
		vfs.zfs.immediate_write_sz			  32768
		vfs.zfs.min_auto_ashift				 12
		vfs.zfs.max_auto_ashift				 13
		vfs.zfs.vdev.queue_depth_pct			1000
		vfs.zfs.vdev.write_gap_limit			4096
		vfs.zfs.vdev.read_gap_limit			 32768
		vfs.zfs.vdev.aggregation_limit		  1048576
		vfs.zfs.vdev.trim_max_active			64
		vfs.zfs.vdev.trim_min_active			1
		vfs.zfs.vdev.scrub_max_active		   2
		vfs.zfs.vdev.scrub_min_active		   1
		vfs.zfs.vdev.async_write_max_active	 10
		vfs.zfs.vdev.async_write_min_active	 1
		vfs.zfs.vdev.async_read_max_active	  3
		vfs.zfs.vdev.async_read_min_active	  1
		vfs.zfs.vdev.sync_write_max_active	  10
		vfs.zfs.vdev.sync_write_min_active	  10
		vfs.zfs.vdev.sync_read_max_active	   10
		vfs.zfs.vdev.sync_read_min_active	   10
		vfs.zfs.vdev.max_active				 1000
		vfs.zfs.vdev.async_write_active_max_dirty_percent  60
		vfs.zfs.vdev.async_write_active_min_dirty_percent  30
		vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
		vfs.zfs.vdev.mirror.non_rotating_inc	0
		vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
		vfs.zfs.vdev.mirror.rotating_seek_inc   5
		vfs.zfs.vdev.mirror.rotating_inc		0
		vfs.zfs.vdev.trim_on_init			   1
		vfs.zfs.vdev.bio_delete_disable		 0
		vfs.zfs.vdev.bio_flush_disable		  0
		vfs.zfs.vdev.cache.bshift			   16
		vfs.zfs.vdev.cache.size				 0
		vfs.zfs.vdev.cache.max				  16384
		vfs.zfs.vdev.metaslabs_per_vdev		 200
		vfs.zfs.vdev.trim_max_pending		   10000
		vfs.zfs.txg.timeout					 5
		vfs.zfs.trim.enabled					1
		vfs.zfs.trim.max_interval			   1
		vfs.zfs.trim.timeout					30
		vfs.zfs.trim.txg_delay				  32
		vfs.zfs.space_map_blksz				 4096
		vfs.zfs.spa_min_slop					134217728
		vfs.zfs.spa_slop_shift				  5
		vfs.zfs.spa_asize_inflation			 24
		vfs.zfs.deadman_enabled				 1
		vfs.zfs.deadman_checktime_ms			5000
		vfs.zfs.deadman_synctime_ms			 1000000
		vfs.zfs.debug_flags					 0
		vfs.zfs.debugflags					  0
		vfs.zfs.recover						 0
		vfs.zfs.spa_load_verify_data			1
		vfs.zfs.spa_load_verify_metadata		1
		vfs.zfs.spa_load_verify_maxinflight	 10000
		vfs.zfs.ccw_retry_interval			  300
		vfs.zfs.check_hostid					1
		vfs.zfs.mg_fragmentation_threshold	  85
		vfs.zfs.mg_noalloc_threshold			0
		vfs.zfs.condense_pct					200
		vfs.zfs.metaslab.bias_enabled		   1
		vfs.zfs.metaslab.lba_weighting_enabled  1
		vfs.zfs.metaslab.fragmentation_factor_enabled  1
		vfs.zfs.metaslab.preload_enabled		1
		vfs.zfs.metaslab.preload_limit		  3
		vfs.zfs.metaslab.unload_delay		   8
		vfs.zfs.metaslab.load_pct			   50
		vfs.zfs.metaslab.min_alloc_size		 33554432
		vfs.zfs.metaslab.df_free_pct			4
		vfs.zfs.metaslab.df_alloc_threshold	 131072
		vfs.zfs.metaslab.debug_unload		   0
		vfs.zfs.metaslab.debug_load			 0
		vfs.zfs.metaslab.fragmentation_threshold  70
		vfs.zfs.metaslab.gang_bang			  16777217
		vfs.zfs.free_bpobj_enabled			  1
		vfs.zfs.free_max_blocks				 18446744073709551615
		vfs.zfs.zfs_scan_checkpoint_interval	7200
		vfs.zfs.zfs_scan_legacy				 0
		vfs.zfs.no_scrub_prefetch			   0
		vfs.zfs.no_scrub_io					 0
		vfs.zfs.resilver_min_time_ms			3000
		vfs.zfs.free_min_time_ms				1000
		vfs.zfs.scan_min_time_ms				1000
		vfs.zfs.scan_idle					   50
		vfs.zfs.scrub_delay					 4
		vfs.zfs.resilver_delay				  2
		vfs.zfs.top_maxinflight				 32
		vfs.zfs.delay_scale					 500000
		vfs.zfs.delay_min_dirty_percent		 60
		vfs.zfs.dirty_data_sync				 67108864
		vfs.zfs.dirty_data_max_percent		  10
		vfs.zfs.dirty_data_max_max			  4294967296
		vfs.zfs.dirty_data_max				  4294967296
		vfs.zfs.max_recordsize				  1048576
		vfs.zfs.default_ibs					 17
		vfs.zfs.default_bs					  9
		vfs.zfs.zfetch.array_rd_sz			  1048576
		vfs.zfs.zfetch.max_idistance			67108864
		vfs.zfs.zfetch.max_distance			 8388608
		vfs.zfs.zfetch.min_sec_reap			 2
		vfs.zfs.zfetch.max_streams			  8
		vfs.zfs.prefetch_disable				0
		vfs.zfs.send_holes_without_birth_time   1
		vfs.zfs.mdcomp_disable				  0
		vfs.zfs.per_txg_dirty_frees_percent	 30
		vfs.zfs.nopwrite_enabled				1
		vfs.zfs.dedup.prefetch				  1
		vfs.zfs.arc_min_prescient_prefetch_ms   6
		vfs.zfs.arc_min_prefetch_ms			  1
		vfs.zfs.l2c_only_size				   0
		vfs.zfs.mfu_ghost_data_esize			0
		vfs.zfs.mfu_ghost_metadata_esize		0
		vfs.zfs.mfu_ghost_size				  0
		vfs.zfs.mfu_data_esize				  9501277696
		vfs.zfs.mfu_metadata_esize			  48141824
		vfs.zfs.mfu_size						9562194432
		vfs.zfs.mru_ghost_data_esize			0
		vfs.zfs.mru_ghost_metadata_esize		0
		vfs.zfs.mru_ghost_size				  0
		vfs.zfs.mru_data_esize				  110184448000
		vfs.zfs.mru_metadata_esize			  247648768
		vfs.zfs.mru_size						110568468992
		vfs.zfs.anon_data_esize				 0
		vfs.zfs.anon_metadata_esize			 0
		vfs.zfs.anon_size					   52961280
		vfs.zfs.l2arc_norw					  1
		vfs.zfs.l2arc_feed_again				1
		vfs.zfs.l2arc_noprefetch				1
		vfs.zfs.l2arc_feed_min_ms			   200
		vfs.zfs.l2arc_feed_secs				 1
		vfs.zfs.l2arc_headroom				  2
		vfs.zfs.l2arc_write_boost			   8388608
		vfs.zfs.l2arc_write_max				 8388608
		vfs.zfs.arc_meta_limit				  66710931456
		vfs.zfs.arc_free_target				 453508
		vfs.zfs.compressed_arc_enabled		  1
		vfs.zfs.arc_grow_retry				  60
		vfs.zfs.arc_shrink_shift				7
		vfs.zfs.arc_average_blocksize		   8192
		vfs.zfs.arc_no_grow_shift			   5
		vfs.zfs.arc_min						 33355465728
		vfs.zfs.arc_max						 266843725824
																Page:  7
------------------------------------------------------------------------
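
For reference, converting the raw arc_min/arc_max tunables at the bottom of the dump into GiB lines up with the summary figures. A quick sanity check in Python (byte values copied from the output above):

Code:
# Quick sanity check: tunables from the dump, converted to GiB.
arc_min = 33355465728    # vfs.zfs.arc_min (bytes)
arc_max = 266843725824   # vfs.zfs.arc_max (bytes)

GiB = 1024.0 ** 3
print("arc_min = %.2f GiB" % (arc_min / GiB))            # -> 31.06 GiB
print("arc_max = %.2f GiB" % (arc_max / GiB))            # -> 248.52 GiB
print("ARC at %.2f%% of max" % (100 * 112.47 * GiB / arc_max))  # -> ~45.26%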


Does anyone have any tips?
 

m0nkey_
MVP · Joined: Oct 27, 2015 · Messages: 2,739
It's leveled out because you're getting a 94% hit ratio with a 112GB ARC. I wouldn't worry about it; the ARC will grow as it needs to.
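
If you want to keep an eye on it yourself, the same numbers come straight from the arcstats kstats. A rough sketch in Python (assuming the standard kstat.zfs.misc.arcstats sysctls):

Code:
# Rough sketch: overall ARC size and hit ratio from the cumulative
# kstat counters. Assumes kstat.zfs.misc.arcstats.* sysctls exist.
import subprocess

def arcstat(name):
    out = subprocess.check_output(
        ["sysctl", "-n", "kstat.zfs.misc.arcstats." + name])
    return int(out.strip())

hits, misses = arcstat("hits"), arcstat("misses")
print("ARC size:  %.2f GiB" % (arcstat("size") / 1024.0 ** 3))
print("hit ratio: %.2f%%" % (100.0 * hits / (hits + misses)))

Note that the counters are cumulative since boot, so sample them twice and take the difference if you want the recent hit ratio rather than the lifetime one.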
 

Chris Moore
Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
vrod said: "it has pretty much flatlined at that size for the last day"

Is the system malfunctioning in some way? Everything you are showing looks like it is working perfectly.
 