ChillyPenguin
Dabbler
Joined: Jun 13, 2014
Messages: 10
FreeNAS-9.10-STABLE-201606072003 (696eba7)
Single L5640, 48GB, Supermicro X8 motherboard
8x6TB in RAIDZ2
White Label (WD Red) hard drives on a Dell H310 flashed to IT mode
Code:
/usr/bin/time mv /some/dataset/folder /some/otherdataset/folder
     1844.84 real         1.31 user       190.39 sys
du -sh /some/otherdataset/folder
125GB
So, this works out to about 67MB/s, which seems very slow to me. A sequential file copy from my old NAS over SCP exceeded 250MB/s; I understand that's a different workload, but I mention it to show that things seem OK in general.
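For reference, that 67MB/s is just the back-of-the-envelope math from the numbers above; the exact figure depends on whether the 125GB reported by du is taken as decimal or binary:

Code:
# 125GB moved in 1844.84s of wall-clock time
echo "scale=2; 125 * 1000 / 1844.84" | bc   # ~67.75 MB/s (decimal GB)
echo "scale=2; 125 * 1024 / 1844.84" | bc   # ~69.38 MiB/s (binary GiB)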
This is gstat during a copy.
Code:
dT: 1.019s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    652    230  19588   10.0    418  29587    1.4   68.1| da0
    1    632    224  19086    9.7    404  29410    1.4   65.0| da1
    4    636    225  19137    8.7    407  29543    1.5   70.7| da2
    6    653    222  18933   11.3    427  30407    1.3   73.7| da3
    0    656    232  19769   10.6    420  29567    1.3   74.7| da4
    0    698    234  19942    9.2    460  33014    1.3   66.9| da5
    4    614    232  19761   15.0    379  26097    2.7   88.5| da6
    0    683    237  20260    9.8    442  31263    1.5   67.4| da7
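That snapshot is plain gstat; I grabbed it with something along these lines, with the filter just there to limit the output to the eight pool disks:

Code:
# Physical providers only, 1-second refresh, da0 through da7
gstat -p -I 1s -f 'da[0-7]$'

Adding up the kBps columns by hand, the eight disks together are doing roughly 155MB/s of reads and 240MB/s of writes during the copy.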
I am looking for a sanity check on the performance. My old NAS with 6 drives in an mdadm RAID6 did about double this speed for the same workload (read, then rewrite to the same array), and I was surprised ZFS would be so much slower. If this performance is what I should expect, so be it; I just want verification from someone. I am primarily interested in the performance of cold data (not in ARC) for this use case, so I don't think adding RAM will help.
Side note: does anyone know the command to verify the supported queue depth on the H310 in FreeNAS? My google-fu appears to be lacking. I think I got it to 600, but I'm concerned that if the card is still at 25, that could be the root cause. (Does queue depth even matter here?) Thoughts?
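The closest I have found so far is poking at CAM and the mps(4) driver sysctls, though I am not sure these actually report the HBA queue depth I am after:

Code:
# Per-device tag (queue depth) info reported by CAM
camcontrol tags da0 -v
# mps(4) driver counters for the controller itself
sysctl dev.mps.0 | grep -E 'io_cmds|chain'

camcontrol tags shows the per-device openings, and the dev.mps sysctls at least show how many commands the driver has had in flight.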