SMB read speed regardless of type

derringer

Cadet
Joined
Apr 14, 2023
Messages
5
Hello, I'm wondering if someone can post the expected speeds for a relatively simple example setup, so I know whether I need to spend more time researching the performance limitations of ZFS in TrueNAS or whether this is expected behavior.

The CPU is a 6-core/12-thread Ivy Bridge-era i7 running somewhere between 3.5 and 4 GHz. I have tried an LSI SATA/SAS 6G HBA as well as the motherboard's 3G and 6G SATA ports, and have used 16-32 GB of RAM. The NAS uses a Chenbro 10G network card, talking on the other end to an Intel 10G card in an ESXi virtual machine running Windows Server 2016. I have performed the tests below on both CORE and SCALE with similar results.

I give the above info to try to stay within forum rules, but I really don't want to get bogged down in the specifics of this server build. I am well versed in hardware and have built dozens of high-performance servers both commercially and personally, and I know what I'm doing. This is a retired VMware server that still has enough good hardware in it to be a nice NAS, so I gave TrueNAS a whirl on it.

My observation is the following, and I'd love it if people could jump in and let me know whether these numbers are about right or whether something is obviously wrong. All test drives are a few years old, 7200 RPM SATA and SAS, in the 12-16 TB range (mostly Seagate Exos). The use case I am testing has to be a fairly common one: a single large sequential file transfer, like copying large high-resolution videos or backup files from the SMB share to a local drive on the ESXi server, which is backed by a 6-disk RAID10 capable of 500 MB/s both read and write. Compression is off, recordsize is 1M, atime is off, ashift is 12.
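For completeness, this is roughly how those properties get set (a sketch only; the pool name "tank", the dataset "tank/media", and the daX device names are placeholders, not my exact layout):

# ashift is fixed per vdev at pool creation time
zpool create -o ashift=12 tank mirror da0 da1 mirror da2 da3

# dataset properties used for the tests
zfs set compression=off tank/media
zfs set recordsize=1M tank/media
zfs set atime=off tank/media

# confirm what is actually in effect
zfs get compression,recordsize,atime tank/media
zpool get ashift tank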

2 disk Mirror (1 vdev): 80 MB/s read, 200 MB/s write
4 disk Mirror (2 vdevs): 150 MB/s read, 450 MB/s write

4 disk RAIDZ1 (1 vdev): 145 MB/s read, 300 MB/s write

I have dozens more disks, but I'm not going to continue unless someone can verify that these numbers are in the ballpark. What I am specifically referring to is the read speed. It is, frankly, unacceptably low, and I'll have to find another solution if this is typical of spinning-disk performance under ZFS. If someone could verify that something isn't massively wrong, I can move on without testing larger arrays.

From watching per-disk activity, it looks like reads are being spread across all 4 disks in the 4-disk mirror, for instance, and it must either have too small a read buffer or be repositioning the heads and incurring seek penalties (neither of which is desirable). I know ZFS does a lot of things in the background, which is why I was hoping this test would perform better so I could use it, but a 4-disk mirror should be able to read at at least 500 MB/s to match even the worst hardware RAID solutions. In its current incarnation it just doesn't appear to perform well on large sequential file transfers. Does anyone else have conflicting information, or arrays of equal or larger size where they can transfer single large files faster?
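For anyone who wants to check the same thing, this is roughly how I've been watching the per-disk read distribution during a transfer (a quick sketch; "tank" is a placeholder pool name):

# per-vdev / per-disk bandwidth, refreshed every second
zpool iostat -v tank 1

# CORE / FreeBSD: per-disk busy % and throughput (physical disks only)
gstat -p

# SCALE / Linux equivalent, if sysstat/iostat is available
iostat -xm 1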

Thanks for any assistance or further datapoints.
 

derringer

Cadet
Joined
Apr 14, 2023
Messages
5
New numbers below; please disregard the ones above. I found a faulty SSD array on the read target that needed to be trimmed, which was causing the slowdown. Here are the new numbers, but I have similar questions. It doesn't appear that ZFS uses the extra mirror spindles the way many hardware RAID solutions do. Is this correct? Are these reasonable numbers, or do I still have an issue?

2 disk Mirror (1 vdev): 260 MB/s read, 220 MB/s write
4 disk Mirror (2 vdevs): 500 MB/s read, 440 MB/s write

3 disk stripe (3 vdevs): 690 MB/s read, 700 MB/s write

I haven't tested the others yet.

What seems odd is that a 4-disk mirror pool should be able to read from all 4 spindles, yet it doesn't appear to. Is this the design of ZFS? It seems wasteful: the data is sitting on all 4 spindles and it doesn't use them all? I just want to make sure I'm understanding this correctly. Is it the same for IOPS? If so, no wonder people say it isn't a high-performance system. The IOPS and data are available from 4 spindles and it only uses 2?
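If it matters, this is roughly how I'd measure random-read IOPS against the pool directly to take SMB and the network out of the picture (a sketch only; fio may need to be installed, and /mnt/tank/fio.test is a placeholder path):

# 4K random reads; keep the file size well above RAM so ARC can't serve it all
fio --name=randread --filename=/mnt/tank/fio.test --size=64g \
    --rw=randread --bs=4k --ioengine=posixaio --iodepth=32 \
    --numjobs=1 --runtime=60 --time_based --group_reporting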

Thanks for any clarification,
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442

I also have 4 SSDs in a 2-mirror-vdev config. I just did a test with dd to see how it looks now.

For writing a 256GB file:
dd if=/dev/zero of=test.dat bs=1m count=256000
256000+0 records in
256000+0 records out
268435456000 bytes transferred in 1024.291086 secs (262069503 bytes/sec)

For reading it back, I first read about 300GB of other data from the server in an attempt to reduce the impact of the ARC (I have 256GB of memory). Not 100% effective, but that's all I've got time for at the moment for casual testing:

Reading back:
dd of=/dev/null if=test.dat bs=1m count=256000
256000+0 records in
256000+0 records out
268435456000 bytes transferred in 154.230371 secs (1740483762 bytes/sec)

I monitored the disks during the readback, and it appears that all 4 disks were used the entire time to read back the file. I could not have read 1.6 GB/s unless it was using them all.

From what I recall, this matches my tests from a few years ago. I do remember that for some reason the load was split between the SSDs in mirror vdevs, but not between the HDDs in mirror vdevs.
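For what it's worth, one way to sanity-check how much of a readback like this was served from ARC rather than disk is to snapshot the ARC counters before and after the read (a rough sketch; the sysctl names below are the FreeBSD/CORE ones, and on SCALE the same counters live in /proc/spl/kstat/zfs/arcstats):

# CORE / FreeBSD: ARC size plus hit/miss counters, run before and after the read
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# SCALE / Linux: the same counters
grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats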
 