Identifying performance issues and/or the need for an upgrade

Status
Not open for further replies.

Tancients

Dabbler
Joined
May 25, 2015
Messages
23
Current setup: I have a 36-bay Supermicro chassis running FreeNAS 9.10. Currently only 12 of the bays are populated, all connected through a single M1015 card flashed to IT mode. It serves VMs to ESXi via iSCSI over a 10GbE Intel fiber NIC, and also exposes SMB/NFS shares over the Intel 1Gb NICs on the motherboard. I have 96GB of RAM, which may be overkill, since it's completely dedicated to ZFS. Each drive is a 4TB WD Red NAS drive, for a current total of 24TB usable, but I only allocated 12TB in order to keep the pool under 50% utilization, since it's backing iSCSI.
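
(As a quick sanity check on that 50% figure, the allocation can be read straight off the pool and the zvol; a minimal sketch is below, where Poolbase/iscsi is only a placeholder for whatever the iSCSI extent's zvol is actually named.)
Code:
# pool-wide usage: the CAP column should stay well under ~50% for iSCSI workloads
zpool list Poolbase
# per-dataset breakdown
zfs list -o name,used,avail,refer -r Poolbase
# zvol size and reservation (Poolbase/iscsi is a placeholder name)
zfs get volsize,refreservation Poolbase/iscsi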

I set up the 12 drives as a stripe of mirrors (effectively RAID 10). zpool status output below:

Code:
  pool: Poolbase
 state: ONLINE
  scan: scrub repaired 0 in 2h29m with 0 errors on Sat Apr 15 04:29:35 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Poolbase                                        ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/5e62dcdd-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/5ee03eb0-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/5f65db21-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/5fed5d33-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/606fab56-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/60f2bae5-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/618a6630-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/96afa2a2-9272-11e6-8020-0025902afba2  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/62a67e32-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/63404443-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/63cc4aad-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0
            gptid/64569a29-0ff8-11e5-9352-0025902afba2  ONLINE       0     0     0


I should mention this is more of a homelab than a production environment, but I do try to follow good practices as long as it makes financial sense to do so.
I'm looking at upgrading my storage and trying to improve performance. Based on local tests, I don't think my current storage is performing to its potential, but my testing methods could be wrong, so I'm posting here to double-check.
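
(As background on the expansion side: a pool of mirrors grows by adding more two-way mirror vdevs, which also adds IOPS for the iSCSI workload. A minimal CLI sketch is below; da12 and da13 are placeholder device names for two new disks, and in FreeNAS this would normally be done through the Volume Manager rather than by hand.)
Code:
# sketch only -- da12/da13 are placeholder names for two newly installed disks
# (FreeNAS normally partitions and labels drives itself via the GUI Volume Manager)
zpool add Poolbase mirror da12 da13
zpool status Poolbase   # a new mirror-6 vdev should appear at the bottom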

First I tested using the method I found at http://louwrentius.com/74tb-diy-nas-based-on-zfs-on-linux.html; the results are below.
Code:
[root@freenas /mnt/Poolbase]# dd if=/dev/zero of=test.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 37.415629 secs (2802508017 bytes/sec)
[root@freenas /mnt/Poolbase]# dd if=test.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 19.719403 secs (5317483488 bytes/sec)

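(One caveat with those /dev/zero numbers: if the dataset has lz4 compression enabled, which is the FreeNAS default, zeroes compress to almost nothing and the results get inflated. A minimal variant of the same test against a scratch dataset with compression turned off is sketched below; Poolbase/bench is just a placeholder name.)
Code:
# scratch dataset with compression off so /dev/zero data isn't compressed away
zfs create -o compression=off Poolbase/bench
dd if=/dev/zero of=/mnt/Poolbase/bench/test.bin bs=1M count=100000
dd if=/mnt/Poolbase/bench/test.bin of=/dev/null bs=1M
# clean up the scratch dataset afterwards
zfs destroy Poolbase/bench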

But I've been informed I should be testing with rsync and a ramdisk for more realistic results. So I created and mounted a ramdisk on /mnt/temp using the following command: mdmfs -s 10G md1 /mnt/temp/ and then ran a test by copying an ISO to and from it, per below.
Code:
[root@freenas /mnt/Poolbase/Temp4now]# rsync en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso /mnt/temp/ --progress
en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso
  2,642,147,328 100%  170.37MB/s    0:00:14 (xfr#1, to-chk=0/1)
[root@freenas /mnt/Poolbase/Temp4now]# rsync /mnt/temp/en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso en.iso --progress
en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso
         32,768   0%    0.00kB/s    0:00:00
  2,642,147,328 100%  175.75MB/s    0:00:14 (xfr#1, to-chk=0/1)


But this performance seems below what I should be able to get from 6 spindles' worth of 4TB WD Reds; it looks like half of what I'd expect, at most.
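
(Part of that gap may be rsync itself: it copies a single stream and checksums the data as it goes. One way to separate rsync overhead from pool throughput would be to time a plain cp of the same file from the ramdisk, as sketched below using the paths from the test above.)
Code:
# time a plain single-stream copy from the ramdisk to the pool
/usr/bin/time -h cp /mnt/temp/en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso \
    /mnt/Poolbase/Temp4now/cp_test.iso
# remove the extra copy afterwards
rm /mnt/Poolbase/Temp4now/cp_test.iso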

I also saw the commands referenced here for testing performance, so their output is below.
Code:
[root@freenas /mnt/Poolbase/Temp4now]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 277.660321 secs (386710575 bytes/sec)
[root@freenas /mnt/Poolbase/Temp4now]# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 152.828701 secs (702578650 bytes/sec)


So, the questions I have:
1. Which test is the most accurate?
2. I'm currently using only a single M1015 in IT mode to drive these disks through a backplane. Could I be hitting the capacity of the card?
3. It looks like my ARC hit ratio never exceeds 88.4%; is there anything I can do to tune it? It starts out at about 60% on boot and then rises over a few days. Do I possibly have too much RAM? (See the ARC-stats sketch below.)
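
(For question 3, a minimal sketch of pulling the raw ARC numbers on FreeBSD/FreeNAS; the hit ratio is just hits divided by hits plus misses.)
Code:
# current ARC size and configured maximum
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
# lifetime hit/miss counters; hit ratio = hits / (hits + misses)
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses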
 

Vito Reiter

Wise in the Ways of Science
Joined
Jan 18, 2017
Messages
232
You're still well exceeding the capabilities of your network. From what I can gather from all of this, your speeds look good for such large drives, but there are definitely posts and documentation on further performance tweaks. Also, from what I know, striping across the first and second vdev makes a huge difference, but after that it kind of levels out. Anyway, I hope you can figure this out; you're definitely not the only one with this problem.

Edit: Also, the more accurate tests are going to be over-the-network tests: manually moving files the same size as the ones your workloads use, whether big or small. In-the-box operations are really variable.
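
(Along those lines, a quick sketch of an over-the-network check, assuming iperf is available on both the FreeNAS box and a client: measure raw 10GbE throughput first, then time a copy of a real file over the mounted share. The hostname and share path are placeholders.)
Code:
# on the FreeNAS box: start an iperf server
iperf -s
# on a client: measure raw network throughput to the server for 30 seconds
iperf -c freenas -t 30
# then time a real-file copy over the mounted share (path is a placeholder)
time cp en_windows_10_enterprise_2016_ltsb_x86_dvd_9060010.iso /mnt/nas_share/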
 