Questions on vdev construction after pool re-creation, and performance


Ender117

Patron
Joined
Aug 20, 2018
Messages
219
So I broke my boot mirror of S3500 80G drives, added one to my pool as SLOG, offlined my P3700, and here is what I got:
Code:
root@freenas:/mnt/Mirrors/test # iozone -a -s 512M -O
		Iozone: Performance Test of File I/O
				Version $Revision: 3.457 $
				Compiled for 64 bit mode.
				Build: freebsd

		Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
					 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
					 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
					 Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
					 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
					 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
					 Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
					 Vangel Bojaxhi, Ben England, Vikentsi Lapa,
					 Alexey Skidanov.

		Run began: Fri Nov  2 10:15:44 2018

		Auto Mode
		File size set to 524288 kB
		OPS Mode. Output is in operations per second.
		Command line used: iozone -a -s 512M -O
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	 6033	 6226   341256   350759   264809	 5971   306945	  6401	272312	 6285	 6177   332459   338755
		  524288	   8	 5090	 5163   259375   271731   224323	 5048   252177	  5528	235730	 5206	 5177   240328   245175
		  524288	  16	 4065	 3977   188452   190188   181544	 4092   175242	  4197	182044	 4060	 4069   170761   173194
		  524288	  32	 2856	 2873   111675   115987   114848	 2802   119515	  2863	118071	 2851	 2824	99545   101752
		  524288	  64	 1604	 1606	59248	59696	60090	 1602	60564	  1612	 58233	 1608	 1600	53657	53995
		  524288	 128	  799	  760	32210	32432	32699	  723	32537	   797	 31928	  798	  798	27784	28038
		  524288	 256	  405	  400	17950	18148	18578	  406	16956	   404	 16754	  406	  408	13409	13496
		  524288	 512	  205	  205	 8816	 8831	 8919	  205	 9508	   205	  9531	  202	  205	 7413	 7477
		  524288	1024	  102	  102	 4485	 4498	 4564	  102	 4552	   102	  4465	  102	  102	 3466	 3478
		  524288	2048	   51	   51	 2409	 2433	 2449	   51	 2256		51	  2226	   51	   51	 1742	 1783
		  524288	4096	   25	   25	 1117	 1128	 1132	   25	 1211		25	  1116	   25	   25	  861	  872
		  524288	8192	   12	   12	  557	  560	  555	   12	  563		12	   540	   12	   12	  404	  418
		  524288   16384		6		6	  168	  169	  170		6	  166		 6	   166		6		6	   84	   85

iozone test complete.
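
For the record, the vdev shuffle described above amounts to something like the following. This is only a sketch; the pool and device names (freenas-boot, Mirrors, ada0, nvd0) are inferred from the prompts and outputs in this thread, not copied from the actual session:
Code:
# Detach one S3500 from the boot mirror, add it to the data pool as SLOG,
# then take the P3700 log device offline. All names are assumptions.
zpool detach freenas-boot ada0p2   # break the boot mirror
zpool add Mirrors log ada0         # S3500 as the new SLOG
zpool offline Mirrors nvd0         # P3700 log goes offline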



For whatever reason, I cannot run diskinfo -wS on it:
Code:
root@freenas:/mnt/Mirrors/test # diskinfo -wS ada0
ada0
		512			 # sectorsize
		34000000512	 # mediasize in bytes (32G)
		66406251		# mediasize in sectors
		4096			# stripesize
		0			   # stripeoffset
		65879		   # Cylinders according to firmware.
		16			  # Heads according to firmware.
		63			  # Sectors according to firmware.
		INTEL SSDSC2BB080G4	 # Disk descr.
		BTWL341205HN080KGN	  # Disk ident.
		Not_Zoned	   # Zone Mode

Synchronous random writes:
		 0.5 kbytes: diskinfo: Sync write error: Bad file descriptor


But I believe the iozone numbers above are consistent with others' diskinfo -wS results in this thread, so it sounds like we are seeing something NVMe-specific. (The sync write failure above is likely just because the device is in use as SLOG, so diskinfo cannot open it for writing.)
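If it is just the device being busy as SLOG, removing it from the pool first should let diskinfo open it read-write. A guess, with names assumed as above (log vdevs can be removed from a pool live):
Code:
# Pull the SATA SSD out of the pool so diskinfo can get exclusive access,
# run the sync write test, then put it back. Names are assumptions.
zpool remove Mirrors ada0
diskinfo -wS /dev/ada0
zpool add Mirrors log ada0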

Yes, this was the single-X5650 with the S3700. It might be that the PCIe devices are able to enter a low-power "sleep state" more readily due to their closer hardware ties; the SATA/SAS devices might always have some activity, or the HBA might keep itself in full-power mode all the time, so there's no wake-up latency.



I don't have that much overhead on a recordsize=16K dataset. My small-block results (4/8/16K) are about 80% of the diskinfo numbers, and from there on up it's 90%+.

Interesting. I wonder if there is something like a power policy you can set on the SSDs, or whether some keep-awake command is available.
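Short of a vendor knob, one crude way to test the sleep-state theory would be to keep the device busy from the host. A sketch only; the device name is an assumption:
Code:
# Crude keep-awake: touch the NVMe device every 100 ms so it never idles
# into a low-power state. Run it while benchmarking; stop with Ctrl-C.
while true; do
    dd if=/dev/nvd0 of=/dev/null bs=512 count=1 2>/dev/null
    sleep 0.1
done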
 
Joined
Dec 29, 2014
Messages
1,135
I guess this isn't really surprising, but the system with the faster CPU seems to perform a little better. Both systems now have HT disabled and power policies off. I haven't messed with the workload setting yet.
This is the E5-2637 v2 @ 3.50GHz
Code:
root@freenas2:/mnt/RAIDZ2-I/ftp # diskinfo -wS /dev/nvd0
/dev/nvd0
		512			 # sectorsize
		280065171456	# mediasize in bytes (261G)
		547002288	   # mediasize in sectors
		0			   # stripesize
		0			   # stripeoffset
		INTEL SSDPED1D280GA	 # Disk descr.
		PHMB742401A6280CGN	  # Disk ident.

Synchronous random writes:
		 0.5 kbytes:	 13.6 usec/IO =	 35.9 Mbytes/s
		   1 kbytes:	 13.6 usec/IO =	 71.7 Mbytes/s
		   2 kbytes:	 13.9 usec/IO =	140.7 Mbytes/s
		   4 kbytes:	 11.4 usec/IO =	343.4 Mbytes/s
		   8 kbytes:	 13.2 usec/IO =	589.8 Mbytes/s
		  16 kbytes:	 17.7 usec/IO =	883.3 Mbytes/s
		  32 kbytes:	 25.7 usec/IO =   1214.7 Mbytes/s
		  64 kbytes:	 42.2 usec/IO =   1481.7 Mbytes/s
		 128 kbytes:	 74.5 usec/IO =   1678.9 Mbytes/s
		 256 kbytes:	135.9 usec/IO =   1839.4 Mbytes/s
		 512 kbytes:	253.1 usec/IO =   1975.3 Mbytes/s
		1024 kbytes:	497.2 usec/IO =   2011.4 Mbytes/s
		2048 kbytes:	977.8 usec/IO =   2045.4 Mbytes/s
		4096 kbytes:   1942.2 usec/IO =   2059.5 Mbytes/s
		8192 kbytes:   3883.2 usec/IO =   2060.2 Mbytes/s

This is the E5-2660 v2 @ 2.20GHz
Code:
root@freenas:/nonexistent # diskinfo -wS /dev/nvd0
/dev/nvd0
		512			 # sectorsize
		280065171456	# mediasize in bytes (261G)
		547002288	   # mediasize in sectors
		0			   # stripesize
		0			   # stripeoffset
		INTEL SSDPED1D280GA	 # Disk descr.
		PHMB742200WL280CGN	  # Disk ident.

Synchronous random writes:
		 0.5 kbytes:	 16.6 usec/IO =	 29.4 Mbytes/s
		   1 kbytes:	 16.8 usec/IO =	 58.3 Mbytes/s
		   2 kbytes:	 17.1 usec/IO =	114.4 Mbytes/s
		   4 kbytes:	 14.3 usec/IO =	272.8 Mbytes/s
		   8 kbytes:	 16.3 usec/IO =	479.9 Mbytes/s
		  16 kbytes:	 21.0 usec/IO =	742.3 Mbytes/s
		  32 kbytes:	 29.9 usec/IO =   1046.5 Mbytes/s
		  64 kbytes:	 46.7 usec/IO =   1337.7 Mbytes/s
		 128 kbytes:	 80.4 usec/IO =   1555.1 Mbytes/s
		 256 kbytes:	145.1 usec/IO =   1723.2 Mbytes/s
		 512 kbytes:	271.9 usec/IO =   1838.6 Mbytes/s
		1024 kbytes:	520.3 usec/IO =   1921.9 Mbytes/s
		2048 kbytes:   1026.8 usec/IO =   1947.8 Mbytes/s
		4096 kbytes:   2022.6 usec/IO =   1977.7 Mbytes/s
		8192 kbytes:   4062.9 usec/IO =   1969.0 Mbytes/s

One other point of marginal interest: when I deleted the last partition entries for SLOGs and L2ARCs, FreeNAS nuked the partition table on nvd0. Not a big deal since I am doing it manually, but kind of a surprise.
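
For anyone else redoing it by hand, the manual repartitioning is roughly the following. A sketch only; the partition sizes and labels are placeholders, and the pool name (RAIDZ2-I) is taken from the prompt above:
Code:
# Recreate a GPT and carve out SLOG and L2ARC partitions manually.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog nvd0    # small SLOG partition
gpart add -t freebsd-zfs -l l2arc nvd0          # remainder as L2ARC
zpool add RAIDZ2-I log nvd0p1
zpool add RAIDZ2-I cache nvd0p2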
 
Joined
Dec 29, 2014
Messages
1,135
Here is more info. In all tests: dataset recordsize=16K, compression=off.

E5-2637 v2 @ 3.50GHz - 16 x 1TB 7.2K SATA, sync=on
Code:
		Auto Mode
		File size set to 524288 kB
		OPS Mode. Output is in operations per second.
		Command line used: iozone -a -s 512M -O
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	22210	22348   350757   350766   266882	18737   286963	 24860	255656	22484	22240   322526   320270
		  524288	   8	18383	18975   266731   262905   226659	17349   223236	 24098	209654	18920	19048   240630   238193
		  524288	  16	15228	15153   131974   136124   165572	15230   150321	 20555	194299	15114	15015   137915   132909
		  524288	  32	10386	10472	69991	77457	85099	10408	83147	 12991	 97516	10838	10188	96784	94763
		  524288	  64	 6218	 6387	72599	46070	32715	 6081	79995	  7067	 39917	 6337	 6147	43228	49556
		  524288	 128	 3822	 3586	19880	24214	27920	 3623	16461	  4189	 17378	 3873	 3598	26642	21476
		  524288	 256	 2184	 2503	 7482	15747	14950	 2190	16594	  2848	  8574	 2153	 2243	10616	 9835
		  524288	 512	 1280	 1249	 8055	 7778	 8406	 1348	10982	  1492	  4888	 1205	 1315	 5157	 4696
		  524288	1024	  673	  695	 2933	 3524	 3833	  693	 4603	   880	  2015	  668	  790	 1363	 2137
		  524288	2048	  332	  367	 1014	 1540	 1890	  340	 2170	   452	   956	  334	  381	 1958	  665
		  524288	4096	  158	  190	 1161	  391	  704	  167	 1057	   213	   430	  165	  191	  710	  286
		  524288	8192	   83	   94	  173	  273	  343	   82	  465	   101	   193	   81	   97	  127	  165
		  524288   16384	   40	   42	  159	  158	  177	   44	  221		44	   125	   41	   42	   88	   84

iozone test complete.

E5-2637 v2 @ 3.50GHz - 16 x 1TB 7.2K SATA, sync=off
Code:
		Auto Mode
		File size set to 524288 kB
		OPS Mode. Output is in operations per second.
		Command line used: iozone -a -s 512M -O
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4   114843   122018   362978   241269   188434	91848   308571	148711	201137   127127   116531   323565   271832
		  524288	   8	81540   118284   165491   233935   180213	86578   269785	144145	155272	94278	92429   242321   233731
		  524288	  16	77753	68791   179138   183214   181462	84947   114049	132407	136606	78307	93307   156372	93732
		  524288	  32	39482	46777   115677   114027   113497	45196	67021	 70364	 81672	41994	46242   116610   116824
		  524288	  64	21015	24563	65753	61951	62989	23335	64213	 43091	 29528	21172	24314	50782	48699
		  524288	 128	10452	12571	17430	24571	27827	10608	34705	 19469	 33731	13173	 8608	25084	25141
		  524288	 256	 5098	 6386	17260	19792	19464	 4565	15371	  9817	 16109	 6404	 6153	13383	17155
		  524288	 512	 3067	 2376	 8646	 8857	 8412	 3007	 9202	  4852	  8796	 3169	 2393	 6040	 6329
		  524288	1024	 1381	 1663	 4548	 5192	 6039	 1211	 3601	  2402	  4133	 1584	 1604	 3673	 4309
		  524288	2048	  728	  646	 2093	 2173	 2190	  819	 2504	  1162	   984	  728	  814	 1545	 1465
		  524288	4096	  381	  297	  901	  947	 1022	  380	 1073	   548	  1014	  388	  289	  643	  650
		  524288	8192	  163	  188	  516	  498	  190	  163	  426	   205	   429	  214	  157	  195	  193
		  524288   16384	   79	   95	  192	  112	  143	   81	  182		96	   191	   90	   73	  100	   99

iozone test complete.

E5-2660 v2 @ 2.20GHz - 12 x 2TB 7.2K SAS, sync=on
Code:
		Auto Mode
		File size set to 524288 kB
		OPS Mode. Output is in operations per second.
		Command line used: iozone -a -s 512M -O
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	16839	17686   263118   260723   202867	16181   238240	 18546	200626	18155	17832   236950   242646
		  524288	   8	14965	15762   213311   207018   198919	14915   191447	 16668	125627	15742	15541   140775   179860
		  524288	  16	12431	12833   178929   179196   153296	12124   153107	 14412	 98691	12697	11921   128532   131759
		  524288	  32	 8950	 9274   114062   106723	94870	 8487   103260	 10503	109679	 8436	 9510	76943	76026
		  524288	  64	 5569	 5333	56305	53753	55620	 5727	59360	  6539	 26771	 5558	 5930	40090	39034
		  524288	 128	 3177	 3242	31753	34034	35126	 3343	13526	  3788	 18983	 3246	 3321	12744	19249
		  524288	 256	 2045	 2258	17445	17054	16501	 2242	15440	  2469	  6907	 2057	 2235	10922	10411
		  524288	 512	 1234	 1131	 8179	 8417	 8416	 1292	 8433	  1586	  7663	 1110	 1309	 5644	 5892
		  524288	1024	  672	  705	 4005	 3989	 4036	  608	 2639	   850	  3739	  697	  714	 2874	 2787
		  524288	2048	  337	  290	 1888	 1934	 1987	  355	 2184	   430	  2095	  364	  359	 1297	  732
		  524288	4096	  151	  183	 1067	 1050	 1061	  180	 1122	   226	  1010	  162	  157	  681	  683
		  524288	8192	   82	   91	  505	  495	  507	   97	  490	   104	   172	   85	   91	  254	  267
		  524288   16384	   41	   47	  167	  167	  168	   36	  160		46	   163	   44	   45	   86	   84

iozone test complete.

E5-2660 v2 @ 2.20GHz - 12 x 2TB 7.2K SAS, sync=off
Code:
		Auto Mode
		File size set to 524288 kB
		OPS Mode. Output is in operations per second.
		Command line used: iozone -a -s 512M -O
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	92507   107747   298911   299094   160922	96394   271436	134411	164669   105048   117012   268114   229604
		  524288	   8	70862   102034   248775   245248   208223	93361   242642	126410	124757	88042	99352   209706   201976
		  524288	  16	73318	69688	96840   145246   139326	69580   160112	114124	155759	78832	80864   137574   149640
		  524288	  32	37634	35554   106373   108753   102081	43050   108470	 65894	 98913	41679	30844	75341	76440
		  524288	  64	19158	22462	63627	65659	60945	22172	62165	 34429	 60664	23153	25659	50394	45067
		  524288	 128	 9722	11843	32996	32462	31827	11564	29699	 17912	 31221	12139	12010	22303	22793
		  524288	 256	 5572	 6736	19631	17522	 7029	 5149	16658	  8854	 15829	 5997	 5986	11302	11849
		  524288	 512	 2522	 2989	 8795	 8743	 9542	 3244	 9749	  4099	  3448	 2686	 2926	 5972	 5876
		  524288	1024	 1285	 1554	 4262	 4453	 4180	 1510	 4215	  2265	  4885	 1658	 1682	 2864	 2853
		  524288	2048	  609	  771	 2117	 2161	 2124	  767	 2199	  1107	  2157	  766	  774	 1455	 1426
		  524288	4096	  310	  364	  989	  966	  449	  322	 1041	   537	   973	  378	  373	  712	  699
		  524288	8192	  160	  186	  470	  479	  490	  189	  609	   211	   176	  165	  186	  260	  283
		  524288   16384	   76	   93	  178	  186	  193	   90	  219	   106	   204	   61	   91	   94	   96

iozone test complete.
 
Joined
Dec 29, 2014
Messages
1,135
These are my options for the workload setting.
upload_2018-11-2_14-18-58.png


Is there anything in your BIOS for PCIe Active State Power Management? Disable that if you can find it.
I didn't see anything relating to that.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
Joined
Dec 29, 2014
Messages
1,135
OK. I set that. This is the processor power management menu.
upload_2018-11-2_14-35-34.png
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
OK. I set that. This is the processor power management menu.
View attachment 26380
What are the options for each setting?
Ah, right. I remember that one from earlier in the thread.

You could limit your iozone testing to only the random writes if you don't want to wait for the whole gamut to run.
I believe you can do that with -i 0 -i 2, though I found the full run fast enough that I didn't bother.
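For example (untested here, but matching the iozone docs; test 0 has to run first so the file that test 2 operates on exists):
Code:
# Run only write/rewrite (test 0) and random read/write (test 2).
iozone -a -s 512M -O -i 0 -i 2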
 
Joined
Dec 29, 2014
Messages
1,135
Looking at it from the BMC, here are all the options.
upload_2018-11-2_14-51-34.png

upload_2018-11-2_14-52-13.png

CPU Performance choices
upload_2018-11-2_14-46-37.png

power technology
upload_2018-11-2_14-48-18.png

P-STATE coordination
upload_2018-11-2_14-49-56.png

Energy performance
upload_2018-11-2_14-50-20.png


Everything else is just enabled or disabled.
 

Joined
Dec 29, 2014
Messages
1,135
Also this.
upload_2018-11-2_14-53-47.png

upload_2018-11-2_14-54-21.png

upload_2018-11-2_14-54-39.png

upload_2018-11-2_14-55-2.png

upload_2018-11-2_14-55-29.png
 


HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'd say disable all of the processor C-states and EIST (SpeedStep) unless one of the other settings already implies disabling them.

Set everything to "maximum power/maximum performance" - the goal here is to get your iozone/real-world numbers as close as possible to your diskinfo benchmark - and then dial things back one by one until we find the offender.

Edit: There's your PCI ASPM (Active State Power Management) and it's presently disabled. There goes that easy answer.

upload_2018-11-2_14-56-21.png
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
I want to add that enabling C-states did not hurt performance much and greatly reduced power use. Though I agree that setting a baseline at max performance and dialing back one by one is the way to go.

Also, if you have a SATA SSD, you can test that as an SLOG as well. I believe the disparity we saw might be somewhat NVMe-specific.
 
Joined
Dec 29, 2014
Messages
1,135
C-states off, SpeedStep off, Turbo Boost on, workload set to I/O sensitive, CPU performance set to high throughput.

Primary, sync=on
Code:
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	22071	23686   322346   351635   288672	19699   250135	 26687	278044	23974	23854   309892   291049
		  524288	   8	19659	20227   223148   216706   212905	18485   208192	 24549	179540	20171	20097   208337   245348
		  524288	  16	15908	15716   121217   132703   180001	16117   123165	 21346	148708	16201	15527   102684   105190
		  524288	  32	11321	10842	59154	63889	73060	10783	61080	 13580	 64255	11118	10931	43275	50622
		  524288	  64	 6120	 5909	43797	41434	52483	 5867	64304	  7316	 37329	 6558	 6377	47366	19937
		  524288	 128	 3380	 2983	23504	23369	26463	 2986	24580	  3554	 15213	 3398	 3416	25831	10612
		  524288	 256	 2193	 2143	 6997	 8751	 9763	 2329	 5866	  2625	  6234	 2390	 1912	 8828	 8985
		  524288	 512	 1255	 1315	 3789	 3711	 4298	 1275	 6069	  1554	  3442	 1262	 1168	 3771	 3515
		  524288	1024	  707	  603	 3016	 2664	 3008	  751	 3154	   855	  1743	  677	  637	 1842	 1916
		  524288	2048	  355	  279	 1457	 1388	 1391	  343	  895	   426	   960	  356	  320	 1113	 1429
		  524288	4096	  153	  164	  823	  800	  313	  160	  739	   207	   349	  158	  184	  220	  256
		  524288	8192	   78	   64	  276	  254	  270	   78	  198		83	   224	   76	   65	  134	  146
		  524288   16384	   38	   39	   70	   80	   96	   41	   83		39		82	   40	   36	   71	   50

Primary, sync=off
Code:
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4   126696   131738   314623   226492   174691	98169   283021	172850	180081   136909   126529   274446   251683
		  524288	   8	90660   107824   183429   171899   186776   118090   147010	167124	134053   116408   110737   179643   154622
		  524288	  16	78949	92531   150946   165733   123904	62592   135819	154976	136574	98970	59733   122669   124655
		  524288	  32	43223	63343	54957	69004	65507	44869	86523	 83817	 87259	35691	46532	70746	69313
		  524288	  64	22013	28551	22020	35887	35825	22401	47447	 46340	 54705	16153	27089	39999	36246
		  524288	 128	11492	16647	13534	17512	16884	11822	24749	 23236	 23922	 9207	12568	19961	19349
		  524288	 256	 5929	 7287	14428	14639	 6685	 5361	12361	 12015	 12236	 6895	 8018	 9038	 5777
		  524288	 512	 2850	 3509	 6426	 5979	 5730	 4159	 4097	  5599	  4628	 3163	 3419	 4829	 5488
		  524288	1024	 1499	 1214	 2975	 3032	 3096	 2029	 1606	  2838	  2248	 1506	 1717	 2535	 2999
		  524288	2048	  724	  871	 1365	 1361	 1457	  861	 2112	  1410	   711	  751	  871	 1034	 1165
		  524288	4096	  352	  287	  691	  678	  807	  330	  450	   606	   551	  362	  463	  290	  364
		  524288	8192	  153	  168	  292	  245	  164	  154	  265	   189	   246	  114	  167	  129	  122
		  524288   16384	   72	   81	  119	  127	  156	   55	  119		81	   129	   90	   54	   60	   64

Secondary, sync=on
Code:
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	16614	18323   228308   246862   207636	15217   205052	 18711	174772	18370	18215   200231   198921
		  524288	   8	15335	16360   118567   150674   138301	14980   172296	 17261	141610	15995	16018   174805   176898
		  524288	  16	12984	12932	69587	96423   103592	13087	64715	  8851	 60066	13392	12557	89383	91403
		  524288	  32	 8738	 8463	72298	72509	70262	 9441	84441	 10180	 47421	 7540	 9205	49370	52626
		  524288	  64	 4619	 5720	50171	44121	19992	 4257	40071	  6485	 43203	 5204	 3007	38196	38414
		  524288	 128	 2867	 2679	20960	19484	19735	 3032	22171	  3336	 10854	 2925	 3104	19564	19354
		  524288	 256	 1892	 2077	11176	10642	11081	 2063	 9939	  2292	  5800	 1517	 2060	 9587	 9811
		  524288	 512	 1125	 1204	 6292	 6697	 6596	 1025	 4736	  1474	  4677	 1242	 1273	 3878	 2917
		  524288	1024	  609	  677	 2863	 2873	 2698	  676	 3228	   849	  2373	  503	  685	 2076	 2064
		  524288	2048	  311	  344	 1197	 1110	  717	  326	 1362	   415	  1274	  349	  382	 1193	  797
		  524288	4096	  153	  173	  659	  694	  641	  172	  643	   230	   822	  129	  174	  452	  449
		  524288	8192	   73	   81	  306	  295	  293	   70	  175		86	   208	   74	   79	  255	  273
		  524288   16384	   34	   34	   99	   99	   72	   33	  105		37	   114	   32	   31	   60	   58

Secondary, sync=off
Code:
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	96413	96301   256411   256996   213599	90548   217533	135535	192712   110965   107018   231902   237061
		  524288	   8	74444	88995   192661   213564   191279	65404   177400	132152	170565   104175   101908   110999   161890
		  524288	  16	70435	82814   134912   134257   126386	79726   124114	129566	109828	51165	80109   111353   108876
		  524288	  32	37171	46379	78117	81227	75755	46605	87018	 71293	 86707	34123	33584	67141	61405
		  524288	  64	20189	23676	45230	43349	41919	23229	41673	 38022	 48228	18180	16838	35994	33523
		  524288	 128	10154	11894	22297	21429	21845	12038	22079	 18802	 18684	11212	11168	15452	15435
		  524288	 256	 4878	 5093	10525	 9257	 9236	 4200	 7329	  9171	 10340	 5665	 5730	 8719	 7988
		  524288	 512	 2683	 3069	 5531	 4901	 4725	 2565	 3294	  4580	  4809	 2703	 2947	 4361	 4377
		  524288	1024	 1294	 1513	 2757	 2557	 2427	 1148	 1663	  2281	  2380	 1371	 1530	 2146	 1936
		  524288	2048	  659	  753	 1460	 1634	 1637	  687	  748	  1128	  1186	  679	  770	 1051	 1046
		  524288	4096	  323	  379	  731	  815	  617	  213	  594	   553	   645	  357	  383	  468	  443
		  524288	8192	  146	  189	  431	  439	  436	  202	  439	   230	   155	  150	  171	  185	  196
		  524288   16384	   66	   75	  101	   89	   81	   68	  113		77	   116	   75	   75	   73	   47
 
Joined
Dec 29, 2014
Messages
1,135
Does anyone have a good reference for how to interpret iozone results? I did a number of searches and looked at a bunch of articles. So far none of them has clicked for me, and I feel more confused than when I started. :-(
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
So I was able to compare diskinfo and iozone on a SATA SSD (S3500 80G):

Code:
root@freenas:~ # diskinfo -wS /dev/ada0
/dev/ada0
		512			 # sectorsize
		34000000512	 # mediasize in bytes (32G)
		66406251		# mediasize in sectors
		4096			# stripesize
		0			   # stripeoffset
		65879		   # Cylinders according to firmware.
		16			  # Heads according to firmware.
		63			  # Sectors according to firmware.
		INTEL SSDSC2BB080G4	 # Disk descr.
		BTWL341205HN080KGN	  # Disk ident.
		Not_Zoned	   # Zone Mode

Synchronous random writes:
		 0.5 kbytes:	242.1 usec/IO =	  2.0 Mbytes/s
		   1 kbytes:	226.8 usec/IO =	  4.3 Mbytes/s
		   2 kbytes:	188.8 usec/IO =	 10.3 Mbytes/s
		   4 kbytes:	105.9 usec/IO =	 36.9 Mbytes/s
		   8 kbytes:	128.1 usec/IO =	 61.0 Mbytes/s
		  16 kbytes:	167.1 usec/IO =	 93.5 Mbytes/s
		  32 kbytes:	293.4 usec/IO =	106.5 Mbytes/s
		  64 kbytes:	581.9 usec/IO =	107.4 Mbytes/s
		 128 kbytes:   1165.3 usec/IO =	107.3 Mbytes/s
		 256 kbytes:   2340.6 usec/IO =	106.8 Mbytes/s
		 512 kbytes:   4675.8 usec/IO =	106.9 Mbytes/s
		1024 kbytes:   9372.6 usec/IO =	106.7 Mbytes/s
		2048 kbytes:  18763.4 usec/IO =	106.6 Mbytes/s
		4096 kbytes:  37571.9 usec/IO =	106.5 Mbytes/s
		8192 kbytes:  75412.3 usec/IO =	106.1 Mbytes/s

root@freenas:/mnt/Mirrors/test # iozone -a -O -s 512M
		Iozone: Performance Test of File I/O
				Version $Revision: 3.457 $
				Compiled for 64 bit mode.
				Build: freebsd

		Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
					 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
					 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
					 Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
					 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
					 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
					 Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
					 Vangel Bojaxhi, Ben England, Vikentsi Lapa,
					 Alexey Skidanov.

		Run began: Thu Nov  8 17:10:10 2018

		Auto Mode
		OPS Mode. Output is in operations per second.
		File size set to 524288 kB
		Command line used: iozone -a -O -s 512M
		Time Resolution = 0.000001 seconds.
		Processor cache size set to 1024 kBytes.
		Processor cache line size set to 32 bytes.
		File stride size set to 17 * record size.
															  random	random	 bkwd	record	stride
			  kB  reclen	write  rewrite	read	reread	read	 write	 read   rewrite	  read   fwrite frewrite	fread  freread
		  524288	   4	 6289	 6446   409685   412461   333279	 6173   361585	  6727	332079	 6452	 6368   381993   384883
		  524288	   8	 5056	 5174   324668   332411   286215	 5209   308769	  5111	285408	 5226	 5202   246454   254724
		  524288	  16	 4048	 4115   243949   244802   228206	 4127   228219	  4298	178677	 4114	 4105   189926   212142
		  524288	  32	 2864	 2857   114890   118830   111543	 2870   113674	  2956	106648	 2848	 2869   126581   127929
		  524288	  64	 1595	 1600	64382	65234	62093	 1598	66570	  1598	 81852	 1527	 1442	65119	66082
		  524288	 128	  796	  788	33071	33337	32534	  795	40082	   797	 31684	  797	  798	33961	30793
		  524288	 256	  404	  400	17494	17589	17193	  405	24015	   404	 23346	  403	  404	13854	13820
		  524288	 512	  202	  203	 9043	 9107	 8996	  203	 8540	   203	  8879	  204	  203	 7154	 7209
		  524288	1024	  102	  101	 4672	 4703	 4637	  101	 4676	   102	  4543	  102	  101	 3673	 3669
		  524288	2048	   51	   50	 2327	 2331	 2316	   51	 3254		50	  3183	   51	   50	 1814	 1841
		  524288	4096	   25	   25	 1606	 1623	 1608	   25	 1545		25	  1138	   25	   25	  901	  925
		  524288	8192	   12	   12	  590	  592	  578	   12	  796		12	   549	   12	   12	  337	  352
		  524288   16384		6		6	  175	  175	  175		6	  173		 6	   172		6		6	   85	   87

iozone test complete.



recordsize=16K, compression=off, sync=always. iozone achieves ~70% to ~95% of the diskinfo numbers depending on write I/O size, which is a reasonable overhead. OTOH, on the P3700 it is only ~25%.
This suggests that the SLOG code path is tuned for SATA-class speeds and cannot keep up with NVMe, so faster SLOG devices hit diminishing returns.
Not sure if this is worth a bug report/feature request?
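
As a rough worked check of those percentages, using the 16K rows from the two outputs above (167.1 usec/IO from diskinfo, 4048 sync writes/s from iozone):
Code:
# diskinfo: 167.1 usec per 16K sync write -> raw device ceiling in IOPS
$ echo "1000000 / 167.1" | bc -l
5984.4404...
# iozone measured 4048 ops/s at 16K against that ceiling
$ echo "4048 / 5984.44" | bc -l
.6764...   # ~68%, the low end of the ~70-95% range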
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
May as well try.

 
Joined
Dec 29, 2014
Messages
1,135
FYI, I am still looking for some kind of resource to help me learn to interpret the results of iozone tests for myself. I still feel clueless in that regard.

On a different note, changing the record size of the dataset backing my NFS-mounted VMware datastore has a HUGE impact on the performance of Storage vMotion.
Here is a move from FreeNAS to a host with local storage, 16K record size.
Vmotion-FN2 to UCS2-16k.PNG

Here is the transfer from local storage back to FreeNAS, 16K record size.
Vmotion-UCS2 to FN2-16k.PNG

After this, I migrated off the primary FreeNAS to the secondary, changed the record size of the dataset back to the default, and then migrated the VMs back to the primary.
Here is a move from FreeNAS to a host with local storage, default record size.
Vmotion-FN2 to UCS2-default-record-size.PNG

Here is the transfer from local storage back to FreeNAS, default record size.
Vmotion-UCS2 to FN2-default-record-size.PNG

I do understand that Storage vMotion and regular VM guest operations have different requirements, but I was surprised at the big differences during the vMotion operations. I have not experienced anything that seemed like poor VM performance with the dataset at the default record size. What are the upsides and downsides of the different record sizes during normal operations? I don't have high performance needs, and I only stare at this when I am doing vMotion operations. I think I am leaning towards going back to the default record size so I don't have to wait as long when I need to move VMs around.
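
For reference, flipping it back is a one-liner, with the caveat that recordsize only applies to newly written blocks; existing blocks keep their size until rewritten (which a Storage vMotion round trip effectively does). The dataset name here is a placeholder:
Code:
# 128K is the ZFS default recordsize. Placeholder dataset name.
zfs set recordsize=128K RAIDZ2-I/vmware
zfs get recordsize RAIDZ2-I/vmware   # verify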
 