GUI suddenly very slow - 11.1

Status: Not open for further replies.

mcox00941

Cadet · Joined Apr 12, 2018 · Messages: 5
Hello all, long-time user, first-time poster. We've been running FreeNAS on a production system successfully for many years. We're currently on 11.1, having upgraded late last year. In the last couple of weeks the GUI performance has tanked: I can log in and get the main page, but some tabs are terribly slow or time out completely. For example, I clicked the 'Control Services' button about 10 minutes ago and it's still trying to load. I've tried Chrome, Edge, and IE with similar results.

The CLI seems to work normally and the processor is bored. I tried restarting the Django and Nginx services (commands below), but the result is the same. Our CIFS share seems to work fine, and the VMs hosted through iSCSI also seem unaffected. Assuming the GUI is updating, I have the green 'OK' light. I plan to do a full reboot at the next maintenance window, but thought I would post here in case others have a better idea.
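For the record, this is how I restarted the services (rc script names as they exist on our 11.1 box; they may be named differently on other versions):

Code:
service django restart
service nginx restart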

Here are some log outputs to help start off.

Code:
root@freenas:~ # zpool status
  pool: FifteenPlusSpare
 state: ONLINE
  scan: scrub repaired 0 in 1 days 04:11:43 with 0 errors on Mon Apr  9 16:11:58 2018
config:

		NAME											STATE	 READ WRITE CKSUM
		FifteenPlusSpare								ONLINE	   0	 0	 0
		  raidz1-0									  ONLINE	   0	 0	 0
			gptid/03847c41-e17e-11e6-b597-002590d303d5  ONLINE	   0	 0	 0
			gptid/1f0ff78e-3bc3-11e5-b32a-002590d303d5  ONLINE	   0	 0	 0
			gptid/8ea3a2b1-56b7-11e7-b597-002590d303d5  ONLINE	   0	 0	 0
			gptid/f23ed20a-3ad5-11e5-b32a-002590d303d5  ONLINE	   0	 0	 0
			gptid/d20347f5-c7bb-11e6-b597-002590d303d5  ONLINE	   0	 0	 0
		  raidz1-1									  ONLINE	   0	 0	 0
			gptid/e96bb4fe-ed1d-11e4-94a8-002590d303d5  ONLINE	   0	 0	 0
			gptid/a4bc50e0-09af-11e3-a865-002590d303d5  ONLINE	   0	 0	 0
			gptid/a5391d13-09af-11e3-a865-002590d303d5  ONLINE	   0	 0	 0
			gptid/a5b06638-09af-11e3-a865-002590d303d5  ONLINE	   0	 0	 0
			gptid/a62af437-09af-11e3-a865-002590d303d5  ONLINE	   0	 0	 0
		  raidz1-2									  ONLINE	   0	 0	 0
			gptid/a6a61418-09af-11e3-a865-002590d303d5  ONLINE	   0	 0	 0
			gptid/23f5d396-61ae-11e7-b597-002590d303d5  ONLINE	   0	 0	 0
			gptid/56aec46a-ecaa-11e6-b597-002590d303d5  ONLINE	   0	 0	 0
			gptid/756c1e74-b8c7-11e6-a030-002590d303d5  ONLINE	   0	 0	 0
			gptid/b83a9cb8-379d-11e5-9e72-002590d303d5  ONLINE	   0	 0	 0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:02:49 with 0 errors on Mon Apr  9 03:48:03 2018
config:

		NAME		STATE	 READ WRITE CKSUM
		freenas-boot  ONLINE	   0	 0	 0
		  da16p2	ONLINE	   0	 0	 0

errors: No known data errors
root@freenas:~ # 


root@freenas:~ # zfs list
NAME																USED  AVAIL  REFER  MOUNTPOINT
FifteenPlusSpare												   9.73T  11.3T  5.73T  /mnt/FifteenPlusSpare
FifteenPlusSpare/.system											441M  11.3T   256K  legacy
FifteenPlusSpare/.system/configs-5728ac67cc72462f84d2bde5a422c037   287M  11.3T   287M  legacy
FifteenPlusSpare/.system/cores									 90.6M  11.3T  90.6M  legacy
FifteenPlusSpare/.system/rrd-5728ac67cc72462f84d2bde5a422c037	   230K  11.3T   230K  legacy
FifteenPlusSpare/.system/samba4									6.67M  11.3T  6.67M  legacy
FifteenPlusSpare/.system/syslog-5728ac67cc72462f84d2bde5a422c037   55.8M  11.3T  55.8M  legacy
FifteenPlusSpare/jails											  204K  11.3T   204K  /mnt/FifteenPlusSpare/jails
freenas-boot													   2.74G  4.46G	31K  none
freenas-boot/ROOT												  2.71G  4.46G	25K  none
freenas-boot/ROOT/11.1-RELEASE									 2.61G  4.46G   870M  /
freenas-boot/ROOT/9.10-STABLE-201604181743						 24.9M  4.46G   482M  /
freenas-boot/ROOT/9.10-STABLE-201605021851						 33.7M  4.46G   515M  /
freenas-boot/ROOT/9.10.1-U4										38.7M  4.46G   661M  /
freenas-boot/ROOT/Initial-Install									 1K  4.46G   496M  legacy
freenas-boot/ROOT/default										  2.74M  4.46G   497M  legacy
freenas-boot/grub												  25.8M  4.46G  6.83M  legacy
root@freenas:~ # 



last pid: 14515;  load averages:  0.98,  0.84,  0.83																		  up 97+23:17:05  15:31:34
65 processes:  2 running, 63 sleeping
CPU:  1.9% user,  0.0% nice,  5.3% system,  0.2% interrupt, 92.6% idle
Mem: 94M Active, 43M Inact, 42M Laundry, 30G Wired, 497M Free
ARC: 4325M Total, 1851M MFU, 2243M MRU, 55M Anon, 88M Header, 88M Other
	 3943M Compressed, 7124M Uncompressed, 1.81:1 Ratio
Swap: 10G Total, 1253M Used, 8986M Free, 12% Inuse, 1372K In

  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND
14513 root		  1  31	0 59288K 26164K uwait   1   0:00   5.30% dtrace
  245 root		 20  20	0   318M 22132K swread  0 416:13   0.61% python3.6
 3569 root		 13  42	0 59864K 15452K uwait   0 312:43   0.32% consul
 2767 root		  5  23	0 51136K 13336K select  1  23.4H   0.30% python3.6
 3668 root		 12  20	0  1222M  9180K nanslp  1 286:44   0.22% collectd
14512 root		  1  21	0  7112K  3720K wait	0   0:00   0.20% ksh93
14488 root		  1  20	0  7948K  3544K CPU3	3   0:00   0.14% top
 2041 root		  1 -52   r0  3520K  3584K nanslp  3  24:36   0.08% watchdogd
65519 root		  2  20	0 40132K  5748K select  0   4:53   0.03% python3.6
14508 root		  1  22	0 59288K 26164K uwait   0   0:00   0.02% dtrace
 2763 root		  1  20	0 19608K  6404K select  3  22:29   0.01% snmpd
13386 root		  1  20	0 13216K  4848K select  3   0:00   0.01% sshd
  436 root		  1  20	0   152M  6632K kqread  2   0:08   0.00% uwsgi
 2356 root		  1  20	0 12512K 12620K select  2   5:21   0.00% ntpd
 3522 root	   1122  20	0   373M 41120K usem	1 101:47   0.00% python3.6
 3564 root		 14  20	0 37688K  6048K uwait   3  19:20   0.00% consul-alerts
 5116 root		 11  20	0 32912K  3164K uwait   2   6:31   0.00% consul
 2503 root		  1  20	0 12096K  4620K select  3   4:45   0.00% proftpd
 5117 root		 11  34	0 32912K  4676K uwait   3   3:26   0.00% consul
 2557 root		  1  20	0 37092K  4920K select  3   2:24   0.00% nmbd
 2566 root		  1  20	0 85324K  4924K select  2   2:23   0.00% winbindd
35720 root		  2  20	0 23360K  2632K kqread  3   1:48   0.00% syslog-ng
 3471 nobody		1  20	0  7144K  2192K select  2   1:04   0.00% mdnsd
 2561 root		  1  20	0   170M  4376K select  2   1:01   0.00% smbd
 2573 root		  1  20	0 44664K  5936K select  1   0:47   0.00% winbindd
 4629 root		  1  48	0  6496K   648K nanslp  2   0:35   0.00% cron
 3238 root		  1  20	0 10680K	 0K nanslp  3   0:34   0.00% <smartd>
 3272 root		  1  21	0  7096K	 0K wait	1   0:19   0.00% <sh>
 5504 root		  1  20	0 87108K  4592K select  0   0:16   0.00% winbindd
 2572 root		  1  20	0   129M  4420K select  2   0:10   0.00% smbd
 2571 root		  1  20	0   128M  4464K select  2   0:10   0.00% smbd
 1687 root		  1  20	0  9172K   116K select  1   0:09   0.00% devd
 2740 root		  1  20	0 12904K  4348K select  2   0:06   0.00% sshd
  589 root		 19  20	0   201M  6984K umtxn   0   0:05   0.00% uwsgi
 5049 root		  1  52	0 79628K  4964K ttyin   2   0:03   0.00% python3.6
93964 root		  1  20	0   176M  5480K select  2   0:01   0.00% smbd
 4619 root		  1  20	0  9000K  2236K select  0   0:01   0.00% zfsd
  198 www		   1  20	0 12920K   928K kqread  3   0:00   0.00% nginx
 3568 root		  1  20	0  6344K  1836K piperd  3   0:00   0.00% daemon
95224 root		  1  20	0   176M  5520K select  0   0:00   0.00% smbd
94939 root		  1  20	0   176M  5396K select  0   0:00   0.00% smbd
 9183 root		  1  20	0 13216K  4768K select  1   0:00   0.00% sshd
13394 root		  1  20	0  7480K  2652K pause   0   0:00   0.00% csh
14515 root		  1  27	0 59288K 25868K CPU1	1   0:00   0.00% dtrace
 9203 root		  1  20	0  7480K  1852K ttyin   3   0:00   0.00% csh
14507 root		  1  21	0  7112K  3720K wait	0   0:00   0.00% ksh93
14514 root		  1  21	0  7112K  3720K wait	1   0:00   0.00% ksh93
 1912 root		  1  20	0 13532K	 0K wait	2   0:00   0.00% <syslog-ng>
 3267 root		  1  20	0  8004K  1768K select  3   0:00   0.00% rsync
 2081 root		  1  20	0  7068K  1924K select  3   0:00   0.00% ctld
 5053 root		  1  52	0  6364K  1820K ttyin   1   0:00   0.00% getty
 5050 root		  1  52	0  6364K  1820K ttyin   0   0:00   0.00% getty
 5055 root		  1  52	0  6364K  1820K ttyin   0   0:00   0.00% getty
 5054 root		  1  52	0  6364K  1820K ttyin   1   0:00   0.00% getty
 5052 root		  1  52	0  6364K  1820K ttyin   0   0:00   0.00% getty
 5051 root		  1  52	0  6364K  1820K ttyin   2   0:00   0.00% getty
 5056 root		  1  52	0  6364K  1820K ttyin   1   0:00   0.00% getty
  243 root		  1  20	0  6344K  1020K piperd  3   0:00   0.00% daemon
 

Stux

MVP · Joined Jun 2, 2016 · Messages: 4,419
How much swap is in use?

Have a look at the reporting tabs (if you can), specifically the memory ones
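From the CLI, a quick check would be something like this (standard FreeBSD tools, so they should be present on FreeNAS as-is):

Code:
# per-device swap usage, human-readable
swapinfo -h
# one-shot snapshot; the "Swap:" line shows total/used/free
top -b | head -8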
 

toadman

Guru · Joined Jun 4, 2013 · Messages: 619
Code:
last pid: 14515;  load averages:  0.98,  0.84,  0.83																		  up 97+23:17:05  15:31:34
65 processes:  2 running, 63 sleeping
CPU:  1.9% user,  0.0% nice,  5.3% system,  0.2% interrupt, 92.6% idle
Mem: 94M Active, 43M Inact, 42M Laundry, 30G Wired, 497M Free
ARC: 4325M Total, 1851M MFU, 2243M MRU, 55M Anon, 88M Header, 88M Other
	 3943M Compressed, 7124M Uncompressed, 1.81:1 Ratio
Swap: 10G Total, 1253M Used, 8986M Free, 12% Inuse, 1372K In

It's odd that you have 30G of Wired memory but only ~4.2G of ARC. What is the other 25G doing? I agree with @Stux: you need to find out what is sucking up all that memory.

I believe 11.1 had a memory leak (apparently fixed in 11.1-U1), but I would have expected a much higher percentage of Inactive memory were that the case. But maybe not.
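A rough sketch for narrowing down where the wired memory is going (stock FreeBSD sysctls and vmstat, nothing FreeNAS-specific):

Code:
# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size
# wired page count and page size (multiply for bytes)
sysctl vm.stats.vm.v_wire_count vm.stats.vm.v_page_size
# biggest kernel malloc consumers
vmstat -m | sort -rn -k3 | head
# UMA zone usage
vmstat -z | head -25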
 

mcox00941

Cadet · Joined Apr 12, 2018 · Messages: 5
I did manage to get the reporting pages to load. I'm not sure what to make of them, though; the system seems to have plenty of free swap.

(attached screenshot: upload_2018-4-13_9-51-51.png — the memory/swap reporting graphs)
 

toadman

Guru · Joined Jun 4, 2013 · Messages: 619
Yea, you can't conclude much from that reporting graph. Assuming you still have less than 5GB of ZFS ARC, something is taking up all that wired memory. You can get an idea by running the following from the command line, which should list the processes using the most memory:
Code:
# top -n -o res
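# (-n: batch mode, print one snapshot instead of the interactive screen;
#  -o res: sort by resident memory, largest consumers first)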


BTW, on the swapping: while it's probably not bad, several folks have noticed excessive swapping on lightly loaded systems. It's a separate issue, but you can read about it here: https://forums.freenas.org/index.php?threads/swap-with-9-10.42749/page-5

The thread title says 9.10, but we've seen it on 11.0 and 11.1 as well. Essentially, you can play with these values to "correct" the issue. The "free_target" values that work will be system-dependent.
Code:
vm.v_free_target: 32768
vfs.zfs.arc_free_target: 32768
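
If you want to experiment before making anything permanent, vfs.zfs.arc_free_target is writable at runtime; to persist either value you would add it as a sysctl-type Tunable in the GUI. The 32768 here is just the example value above, not a recommendation:

Code:
# check the current values
sysctl vm.v_free_target vfs.zfs.arc_free_target
# try a new arc_free_target on the fly (takes effect immediately)
sysctl vfs.zfs.arc_free_target=32768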
 

mcox00941

Cadet · Joined Apr 12, 2018 · Messages: 5
Thank you. We were able to reboot the system today and it's super snappy again. I took the opportunity to apply the most recent 11.1 update (11.1-U4) in case it was that memory-leak issue. I'll try to read up on how you adjust the ARC; I always thought it managed itself. Here are the stats again, though obviously this is right after the reboot (plus a quick check of the ARC limits, below the stats).


Code:
Mem: 720M Active, 208M Inact, 2301M Wired, 28G Free
ARC: 1099M Total, 215M MFU, 762M MRU, 205K Anon, 18M Header, 104M Other
	 794M Compressed, 1878M Uncompressed, 2.36:1 Ratio
Swap: 10G Total, 10G Free


  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND
 4884 root		 15  20	0   207M   161M umtxn   3   0:56   0.00% uwsgi
  245 root		 21  20	0   186M   139M kqread  0   0:18   0.00% python3.6
 2585 root		  1  20	0   169M   137M select  0   0:00   0.00% smbd
 3576 root		  1  20	0   147M   115M kqread  0   0:07   0.00% uwsgi
 2595 root		  1  20	0   128M 98136K select  2   0:00   0.00% smbd
 2596 root		  1  20	0   128M 98108K select  0   0:00   0.00% smbd
 3565 root		  1  52	0   101M 84852K select  0   0:10   0.00% python3.6
 4919 root		  1  52	0 72688K 65132K ttyin   2   0:03   0.00% python3.6
 3709 root		 12  20	0   120M 39680K nanslp  3   0:01   0.00% collectd
 2792 root		  5  21	0 51392K 35268K select  0   0:08   0.68% python3.6
 8302 root		  1  22	0 59288K 26168K uwait   1   0:00   0.68% dtrace
 8298 root		  1  21	0 59288K 26168K uwait   0   0:00   0.68% dtrace
 8307 root		  1  31	0 59288K 26164K uwait   1   0:00   0.00% dtrace
 3610 root		 11  30	0 35152K 17056K uwait   2   0:01   0.00% consul
 2597 root		  1  20	0 44664K 15436K select  2   0:00   0.00% winbindd
 2590 root		  1  20	0 44664K 14944K select  0   0:00   0.00% winbindd
 2581 root		  1  20	0 37092K 13232K select  0   0:00   0.00% nmbd
 2365 root		  1  20	0 12512K 12620K select  1   0:00   0.00% ntpd
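
And the quick check of the limits the ARC is working within (sysctl names as on FreeBSD 11 / FreeNAS 11.1):

Code:
# floor / ceiling the ARC will stay inside
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
# current ARC size in bytes
sysctl -n kstat.zfs.misc.arcstats.size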
 

mcox00941

Cadet · Joined Apr 12, 2018 · Messages: 5
Seems like the ARC is using it now; it must take some time after a reboot to decide what it wants.

Code:
root@freenas:/ # top -n -o res
last pid: 15253;  load averages:  0.86,  0.60,  0.38  up 0+00:36:09	12:07:55
58 processes:  1 running, 57 sleeping

Mem: 730M Active, 212M Inact, 28G Wired, 1904M Free
ARC: 26G Total, 12G MFU, 14G MRU, 113M Anon, 63M Header, 103M Other
	 26G Compressed, 27G Uncompressed, 1.04:1 Ratio
Swap: 10G Total, 10G Free


  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND
 4884 root		 15  20	0   215M   169M umtxn   0   1:00   0.00% uwsgi
14707 root		  1  20	0   173M   140M select  1   0:00   0.00% smbd
  245 root		 21  20	0   186M   139M kqread  0   0:24   0.00% python3.6
 2585 root		  1  20	0   169M   137M select  1   0:00   0.00% smbd
 3576 root		  1  20	0   147M   115M kqread  3   0:07   0.00% uwsgi
 2595 root		  1  20	0   128M 98136K select  1   0:00   0.00% smbd
 2596 root		  1  20	0   128M 98108K select  3   0:00   0.00% smbd
 3565 root		  1  52	0   101M 85072K select  0   0:11   0.00% python3.6
 4919 root		  1  52	0 72688K 65132K ttyin   2   0:03   0.00% python3.6
 3709 root		 12  20	0   120M 40300K nanslp  3   0:03   0.00% collectd
 2792 root		  5  23	0 51392K 35336K select  1   0:22   0.88% python3.6
15243 root		  1  21	0 59288K 26168K uwait   3   0:00   0.68% dtrace
15229 root		  1  20	0 59288K 26168K uwait   3   0:00   0.10% dtrace
15252 root		  1  32	0 59288K 26164K uwait   1   0:00   0.00% dtrace
 3610 root		 11  30	0 35280K 17548K uwait   2   0:04   0.00% consul
 2597 root		  1  20	0 44664K 15436K select  3   0:00   0.00% winbindd
 2590 root		  1  20	0 44664K 14944K select  2   0:00   0.00% winbindd
 2581 root		  1  20	0 37092K 13236K select  1   0:00   0.00% nmbd
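
For anyone wanting to watch the size climb after a boot, a minimal loop (plain sh, assuming the standard kstat.zfs.misc.arcstats.size sysctl, which reports the live ARC size in bytes):

Code:
while true; do
	date
	sysctl -n kstat.zfs.misc.arcstats.size
	sleep 60
done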
 

toadman

Guru · Joined Jun 4, 2013 · Messages: 619
Yes, the ARC does adjust itself dynamically. However, it can be tuned a bit. For example, one can set a max (and min) value ("do not grow larger than this, ever"). The parameters I showed above steer the dynamic operation of the ARC by telling the system how much free memory to keep around (in my example, "keep 32768 pages of free memory around, even if you have to shrink below arc_max to do so"). And there are others. I wouldn't recommend playing with them too much.

I still think your issue is something taking up a bunch of memory that then limits the ARC size. You'll need to discover what those processes are should the issue continue. After you've been running a while, it will be interesting to see the output of that top command again IF the ARC continues to be limited to under 6GB of memory.

You can also look at the output of the following to see what is happening with the ARC: # arc_summary.py | more
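
For completeness: arc_max is a loader tunable, so on FreeNAS a hard cap would be set as a loader-type Tunable in the GUI rather than edited by hand. The value below is purely illustrative:

Code:
# /boot/loader.conf syntax -- cap the ARC at 16 GiB (illustrative only)
vfs.zfs.arc_max="17179869184"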
 

mcox00941

Cadet · Joined Apr 12, 2018 · Messages: 5
That's good to know, thank you. And thank you all for your help. I'll keep an eye on that.

Code:
root@freenas:/ # arc_summary.py | more
System Memory:

		0.25%   80.60   MiB Active,	 0.50%   159.73  MiB Inact
		96.49%  30.03   GiB Wired,	  0.00%   0	   Bytes Cache
		1.19%   380.60  MiB Free,	   1.57%   498.96  MiB Gap

		Real Installed:						 32.00   GiB
		Real Available:				 99.80%  31.94   GiB
		Real Managed:				   97.45%  31.12   GiB

		Logical Total:						  32.00   GiB
		Logical Used:				   98.35%  31.47   GiB
		Logical Free:				   1.65%   540.33  MiB

Kernel Memory:								  577.11  MiB
		Data:						   93.37%  538.87  MiB
		Text:						   6.63%   38.24   MiB

Kernel Memory Map:							  31.12   GiB
		Size:						   2.35%   750.50  MiB
		Free:						   97.65%  30.39   GiB
																Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
		Storage pool Version:				   5000
		Filesystem Version:					 5
		Memory Throttle Count:				  0

ARC Misc:
		Deleted:								822.74k
		Mutex Misses:						   81
		Evict Skips:							81

ARC Size:							   92.85%  27.97   GiB
		Target Size: (Adaptive)		 92.80%  27.95   GiB
		Min Size (Hard Limit):		  12.50%  3.77	GiB
		Max Size (High Water):		  8:1	 30.12   GiB

ARC Size Breakdown:
		Recently Used Cache Size:	   54.48%  15.24   GiB
		Frequently Used Cache Size:	 45.52%  12.73   GiB

ARC Hash Breakdown:
		Elements Max:						   496.34k
		Elements Current:			   97.01%  481.50k
		Collisions:							 131.78k
		Chain Max:							  4
		Chains:								 25.59k
																Page:  2
------------------------------------------------------------------------

ARC Total accesses:									 2.11m
		Cache Hit Ratio:				65.94%  1.39m
		Cache Miss Ratio:			   34.06%  720.23k
		Actual Hit Ratio:			   64.50%  1.36m

		Data Demand Efficiency:		 87.65%  762.33k
		Data Prefetch Efficiency:	   0.60%   600.49k

		CACHE HITS BY CACHE LIST:
		  Anonymously Used:			 1.95%   27.23k
		  Most Recently Used:		   46.27%  645.13k
		  Most Frequently Used:		 51.55%  718.74k
		  Most Recently Used Ghost:	 0.10%   1.33k
		  Most Frequently Used Ghost:   0.13%   1.80k

		CACHE HITS BY DATA TYPE:
		  Demand Data:				  47.92%  668.17k
		  Prefetch Data:				0.26%   3.58k
		  Demand Metadata:			  49.85%  695.03k
		  Prefetch Metadata:			1.97%   27.45k

		CACHE MISSES BY DATA TYPE:
		  Demand Data:				  13.07%  94.15k
		  Prefetch Data:				82.88%  596.91k
		  Demand Metadata:			  2.53%   18.19k
		  Prefetch Metadata:			1.52%   10.97k
																Page:  3
------------------------------------------------------------------------

																Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:						8.32m
		Hit Ratio:					  15.01%  1.25m
		Miss Ratio:					 84.99%  7.07m

																Page:  5
------------------------------------------------------------------------

																Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
		kern.maxusers						   2379
		vm.kmem_size							33416708096
		vm.kmem_size_scale					  1
		vm.kmem_size_min						0
		vm.kmem_size_max						1319413950874
		vfs.zfs.vol.immediate_write_sz		  32768
		vfs.zfs.vol.unmap_sync_enabled		  0
		vfs.zfs.vol.unmap_enabled			   1
		vfs.zfs.vol.recursive				   0
		vfs.zfs.vol.mode						2
		vfs.zfs.sync_pass_rewrite			   2
		vfs.zfs.sync_pass_dont_compress		 5
		vfs.zfs.sync_pass_deferred_free		 2
		vfs.zfs.zio.dva_throttle_enabled		1
		vfs.zfs.zio.exclude_metadata			0
		vfs.zfs.zio.use_uma					 1
		vfs.zfs.zil_slog_bulk				   786432
		vfs.zfs.cache_flush_disable			 0
		vfs.zfs.zil_replay_disable			  0
		vfs.zfs.version.zpl					 5
		vfs.zfs.version.spa					 5000
		vfs.zfs.version.acl					 1
		vfs.zfs.version.ioctl				   7
		vfs.zfs.debug						   0
		vfs.zfs.super_owner					 0
		vfs.zfs.immediate_write_sz			  32768
		vfs.zfs.min_auto_ashift				 12
		vfs.zfs.max_auto_ashift				 13
		vfs.zfs.vdev.queue_depth_pct			1000
		vfs.zfs.vdev.write_gap_limit			4096
		vfs.zfs.vdev.read_gap_limit			 32768
		vfs.zfs.vdev.aggregation_limit		  1048576
		vfs.zfs.vdev.trim_max_active			64
		vfs.zfs.vdev.trim_min_active			1
		vfs.zfs.vdev.scrub_max_active		   2
		vfs.zfs.vdev.scrub_min_active		   1
		vfs.zfs.vdev.async_write_max_active	 10
		vfs.zfs.vdev.async_write_min_active	 1
		vfs.zfs.vdev.async_read_max_active	  3
		vfs.zfs.vdev.async_read_min_active	  1
		vfs.zfs.vdev.sync_write_max_active	  10
		vfs.zfs.vdev.sync_write_min_active	  10
		vfs.zfs.vdev.sync_read_max_active	   10
		vfs.zfs.vdev.sync_read_min_active	   10
		vfs.zfs.vdev.max_active				 1000
		vfs.zfs.vdev.async_write_active_max_dirty_percent  60
		vfs.zfs.vdev.async_write_active_min_dirty_percent  30
		vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
		vfs.zfs.vdev.mirror.non_rotating_inc	0
		vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
		vfs.zfs.vdev.mirror.rotating_seek_inc   5
		vfs.zfs.vdev.mirror.rotating_inc		0
		vfs.zfs.vdev.trim_on_init			   1
		vfs.zfs.vdev.bio_delete_disable		 0
		vfs.zfs.vdev.bio_flush_disable		  0
		vfs.zfs.vdev.cache.bshift			   16
		vfs.zfs.vdev.cache.size				 0
		vfs.zfs.vdev.cache.max				  16384
		vfs.zfs.vdev.metaslabs_per_vdev		 200
		vfs.zfs.vdev.trim_max_pending		   10000
		vfs.zfs.txg.timeout					 5
		vfs.zfs.trim.enabled					1
		vfs.zfs.trim.max_interval			   1
		vfs.zfs.trim.timeout					30
		vfs.zfs.trim.txg_delay				  32
		vfs.zfs.space_map_blksz				 4096
		vfs.zfs.spa_min_slop					134217728
		vfs.zfs.spa_slop_shift				  5
		vfs.zfs.spa_asize_inflation			 24
		vfs.zfs.deadman_enabled				 1
		vfs.zfs.deadman_checktime_ms			5000
		vfs.zfs.deadman_synctime_ms			 1000000
		vfs.zfs.debug_flags					 0
		vfs.zfs.debugflags					  0
		vfs.zfs.recover						 0
		vfs.zfs.spa_load_verify_data			1
		vfs.zfs.spa_load_verify_metadata		1
		vfs.zfs.spa_load_verify_maxinflight	 10000
		vfs.zfs.ccw_retry_interval			  300
		vfs.zfs.check_hostid					1
		vfs.zfs.mg_fragmentation_threshold	  85
		vfs.zfs.mg_noalloc_threshold			0
		vfs.zfs.condense_pct					200
		vfs.zfs.metaslab.bias_enabled		   1
		vfs.zfs.metaslab.lba_weighting_enabled  1
		vfs.zfs.metaslab.fragmentation_factor_enabled  1
		vfs.zfs.metaslab.preload_enabled		1
		vfs.zfs.metaslab.preload_limit		  3
		vfs.zfs.metaslab.unload_delay		   8
		vfs.zfs.metaslab.load_pct			   50
		vfs.zfs.metaslab.min_alloc_size		 33554432
		vfs.zfs.metaslab.df_free_pct			4
		vfs.zfs.metaslab.df_alloc_threshold	 131072
		vfs.zfs.metaslab.debug_unload		   0
		vfs.zfs.metaslab.debug_load			 0
		vfs.zfs.metaslab.fragmentation_threshold  70
		vfs.zfs.metaslab.gang_bang			  16777217
		vfs.zfs.free_bpobj_enabled			  1
		vfs.zfs.free_max_blocks				 18446744073709551615
		vfs.zfs.zfs_scan_checkpoint_interval	7200
		vfs.zfs.zfs_scan_legacy				 0
		vfs.zfs.no_scrub_prefetch			   0
		vfs.zfs.no_scrub_io					 0
		vfs.zfs.resilver_min_time_ms			3000
		vfs.zfs.free_min_time_ms				1000
		vfs.zfs.scan_min_time_ms				1000
		vfs.zfs.scan_idle					   50
		vfs.zfs.scrub_delay					 4
		vfs.zfs.resilver_delay				  2
		vfs.zfs.top_maxinflight				 32
		vfs.zfs.delay_scale					 500000
		vfs.zfs.delay_min_dirty_percent		 60
		vfs.zfs.dirty_data_sync				 67108864
		vfs.zfs.dirty_data_max_percent		  10
		vfs.zfs.dirty_data_max_max			  4294967296
		vfs.zfs.dirty_data_max				  3429136793
		vfs.zfs.max_recordsize				  1048576
		vfs.zfs.default_ibs					 17
		vfs.zfs.default_bs					  9
		vfs.zfs.zfetch.array_rd_sz			  1048576
		vfs.zfs.zfetch.max_idistance			67108864
		vfs.zfs.zfetch.max_distance			 8388608
		vfs.zfs.zfetch.min_sec_reap			 2
		vfs.zfs.zfetch.max_streams			  8
		vfs.zfs.prefetch_disable				0
		vfs.zfs.send_holes_without_birth_time   1
		vfs.zfs.mdcomp_disable				  0
		vfs.zfs.per_txg_dirty_frees_percent	 30
		vfs.zfs.nopwrite_enabled				1
		vfs.zfs.dedup.prefetch				  1
		vfs.zfs.dbuf_cache_lowater_pct		  10
		vfs.zfs.dbuf_cache_hiwater_pct		  10
		vfs.zfs.dbuf_cache_max_shift			5
		vfs.zfs.dbuf_cache_max_bytes			104857600
		vfs.zfs.arc_min_prescient_prefetch_ms   6
		vfs.zfs.arc_min_prefetch_ms			  1
		vfs.zfs.l2c_only_size				   0
		vfs.zfs.mfu_ghost_data_esize			15849540608
		vfs.zfs.mfu_ghost_metadata_esize		0
		vfs.zfs.mfu_ghost_size				  15849540608
		vfs.zfs.mfu_data_esize				  13691098624
		vfs.zfs.mfu_metadata_esize			  29028352
		vfs.zfs.mfu_size						13849648128
		vfs.zfs.mru_ghost_data_esize			14164568064
		vfs.zfs.mru_ghost_metadata_esize		0
		vfs.zfs.mru_ghost_size				  14164568064
		vfs.zfs.mru_data_esize				  15729885184
		vfs.zfs.mru_metadata_esize			  34991104
		vfs.zfs.mru_size						15928131072
		vfs.zfs.anon_data_esize				 0
		vfs.zfs.anon_metadata_esize			 0
		vfs.zfs.anon_size					   37696512
		vfs.zfs.l2arc_norw					  1
		vfs.zfs.l2arc_feed_again				1
		vfs.zfs.l2arc_noprefetch				1
		vfs.zfs.l2arc_feed_min_ms			   200
		vfs.zfs.l2arc_feed_secs				 1
		vfs.zfs.l2arc_headroom				  2
		vfs.zfs.l2arc_write_boost			   8388608
		vfs.zfs.l2arc_write_max				 8388608
		vfs.zfs.arc_meta_limit				  8085741568
		vfs.zfs.arc_free_target				 56617
		vfs.zfs.compressed_arc_enabled		  1
		vfs.zfs.arc_grow_retry				  60
		vfs.zfs.arc_shrink_shift				7
		vfs.zfs.arc_average_blocksize		   8192
		vfs.zfs.arc_no_grow_shift			   5
		vfs.zfs.arc_min						 4042870784
		vfs.zfs.arc_max						 32342966272
																Page:  7
------------------------------------------------------------------------

root@freenas:/ # 

 