Sanity check on expected copy performance

Status
Not open for further replies.

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
FreeNAS-9.10-STABLE-201606072003 (696eba7)

Single L5640, 48GB, Supermicro X8 motherboard

8x6TB in RAIDZ2
White Label (WD Red) hard drives on a Dell H310 flashed to IT mode

Code:
/usr/bin/time mv /some/dataset/folder /some/otherdataset/folder
1844.84 real 1.31 user 190.39 sys
du -sh /some/otherdataset/folder
125GB


So, this looks like about 67MB/s, which seems very slow to me. Sequential file copy from my old NAS over SCP exceeded 250MB/s, which I understand is a different workload; I'm just using that to show that things seem OK in general.

This is gstat during a copy.

Code:
dT: 1.019s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 652 230 19588 10.0 418 29587 1.4 68.1| da0
1 632 224 19086 9.7 404 29410 1.4 65.0| da1
4 636 225 19137 8.7 407 29543 1.5 70.7| da2
6 653 222 18933 11.3 427 30407 1.3 73.7| da3
0 656 232 19769 10.6 420 29567 1.3 74.7| da4
0 698 234 19942 9.2 460 33014 1.3 66.9| da5
4 614 232 19761 15.0 379 26097 2.7 88.5| da6
0 683 237 20260 9.8 442 31263 1.5 67.4| da7

I am looking for a sanity check on the performance. My old NAS with 6 drives in mdadm RAID6 did about double this speed for this workload (read, re-write to the same array), and I was surprised ZFS would be so much slower. If this performance is what I should expect, so be it; I just want verification from someone. I am primarily interested in performance of cold data (not in ARC) for this use case, so I don't think adding RAM will help.

Side note: does anyone know the command to verify the supported queue depth on the H310 in FreeNAS? My google-fu appears to be lacking. I think I got it to 600, but I am concerned that if the card is still at 25, this could be the root cause. (Does this matter here?) Thoughts?
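In case it helps anyone searching later: on FreeBSD, camcontrol can report the per-device queue depth, and the mps(4) driver behind an IT-mode H310 exposes driver sysctls. Device names below are examples, and exact sysctl names can vary by FreeBSD version:

```shell
# Show the current/supported command queue depth ("tags") for one disk;
# da0 is an example -- repeat for each da* device behind the H310.
camcontrol tags da0 -v
# Driver-level settings for the first mps(4) controller:
sysctl dev.mps.0
```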
 

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
Unfortunately, no I do not have answers yet. I have also noticed this system takes a long time to delete files, running about 100 items/second. Any help or suggestions on diagnostic methods would be appreciated.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Create a dataset that has compression turned off, then use dd to write a file to that dataset. Make sure the file is 50GB in size and use a 1M block size.

Your system should read and write around 450MB/s.
 

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
Code:
[root@freenas] /mnt/tank/iotesting# dd if=/dev/zero of=iotest.out bs=1024k count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 62.312933 secs (841379109 bytes/sec)


This looks like ~841MB/s if my math is right.

I did this, and gave up a bit early, I guess. Reads look like about 147MB/s, which seems slow.

Code:
[root@freenas] /mnt/tank/iotesting# dd if=iotest.out of=/dev/null
^C94335599+0 records in
94335599+0 records out
48299826688 bytes transferred in 327.932531 secs (147285866 bytes/sec)


Ninja edit:
I re-ran the read test with bs=1024k... way faster.

Code:
[root@freenas] /mnt/tank/iotesting# dd if=iotest.out of=/dev/null bs=1024k
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 71.552339 secs (732733558 bytes/sec)


I am using a 1M block size on the datasets; do I need to tune something to get FreeNAS to use this for all file operations?
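For anyone hitting the same thing: the slow first read was dd's default block size (512 bytes), not the dataset config; tiny blocks mean one read() syscall per 512 bytes. A quick local demonstration (the file path is just an example):

```shell
# Create a 64 MiB test file, then read it back with a tiny vs. a large
# block size; the per-syscall overhead alone is measurable.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null
time dd if=/tmp/ddtest of=/dev/null bs=512 2>/dev/null   # 131072 read() calls
time dd if=/tmp/ddtest of=/dev/null bs=1M 2>/dev/null    # 64 read() calls
rm /tmp/ddtest
```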
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
No tuning required. Your pool is strangely a little too fast if you ask me. Are you using 7200rpm drives?
 

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
No, the WL drives report 5700rpm in the SMART data. Each one benchmarked at a little over 160MB/s sequential write during burn-in testing before I installed FreeNAS.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Either way, your bottleneck isn't your drives. It might be your low-power CPU that is the limiting factor.
 

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
I don't follow. If the pool will read and write at the above speeds, where does the CPU become a bottleneck in move or copy operations to the point that I get 67MB/s? I'll check again, but I don't think CPU load was ever high.

Any test to confirm your suspicion? I thought the L5640 would be more than enough power.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Well, a mv within the same dataset will be instantaneous, but a cp will use CPU horsepower.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
where does the CPU become a bottleneck in move or copy operations to the point I get 67MB/s? I'll check again but I don't think cpu load was ever high.
SCP likely relies on a single CPU thread for encryption.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm not sure, but mv might have to fall back to a copy and a delete across datasets.

The copy side might be using a sync write.

Without a slog, and with essentially random r/w io on a possibly fragmented array, this could be pretty much a worst case scenario.
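To expand on the copy-and-delete point: each ZFS dataset is a separate filesystem, so rename(2) fails with EXDEV across datasets and mv falls back to copying every block and then deleting the source. Within one filesystem it's a pure rename, which you can see from the inode number surviving the move (paths are examples):

```shell
# Within one filesystem, mv is a rename(2): the inode is preserved and
# no data blocks move. Across filesystems (e.g. two ZFS datasets),
# rename(2) returns EXDEV and mv degrades to copy + delete.
echo test > /tmp/mvtest
ls -i /tmp/mvtest      # note the inode number
mv /tmp/mvtest /tmp/mvtest2
ls -i /tmp/mvtest2     # same inode: a pure rename, no data copied
rm /tmp/mvtest2
```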
 

ChillyPenguin

Dabbler
Joined
Jun 13, 2014
Messages
10
I'm not sure, but mv might have to fall back to a copy and a delete across datasets.

The copy side might be using a sync write.

Without a slog, and with essentially random r/w io on a possibly fragmented array, this could be pretty much a worst case scenario.

The pool has about 70% free space, so fragmentation should not be an issue.
 
