Hello, I'm wondering if someone can post the expected speeds for a relatively simple example setup, so I know whether I need to spend more time researching the performance limitations of ZFS in TrueNAS or whether this is expected behavior.
I have a 6-core/12-thread Ivy Bridge era i7 running somewhere between 3.5 and 4 GHz. I have tried an LSI SATA/SAS 6G HBA as well as the motherboard's 3G and 6G SATA ports, and have used 16-32 GB of RAM. The NAS uses a Chenbro 10G network card, talking to an Intel 10G card on the other end in an ESXi virtual machine running Windows Server 2016. I have performed the tests below on both CORE and SCALE with similar results.
I give the above info to stay within forum rules, but I really don't want to get bogged down in the specifics of the server build. I am well versed in hardware and have built dozens of high-performance servers both commercially and personally, so I know what I'm doing. This is a retired VMware server that still has enough good hardware in it to be a nice NAS, so I gave TrueNAS a whirl on it.
My observations are the following, and I'd love it if people could jump in and let me know whether these are about right or whether something is obviously wrong. All the drives I tested with are a few years old, 7200 RPM SATA and SAS, in the 12-16 TB range (mostly Seagate Exos). The use case I am testing has to be a fairly common one: a single large sequential file transfer, like copying large high-resolution videos or backup files from the SMB share to a local drive on the ESXi server, which is backed by a 6-disk RAID10 and capable of 500 MB/s both read and write. Compression is off, recordsize is 1M, atime is off, ashift is 12.
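For anyone who wants to reproduce the dataset settings, this is roughly how they map to the CLI (a minimal sketch; I actually set them through the TrueNAS GUI, and the pool/dataset names and device names below are just placeholders, not my real ones):

# ashift is a pool-level setting chosen at creation time (example: a 2-disk mirror)
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# dataset properties used for the tests
zfs set compression=off tank/media
zfs set recordsize=1M tank/media
zfs set atime=off tank/media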
2-disk Mirror (1 vdev): 80 MB/s read, 200 MB/s write
4-disk Mirror (2 vdevs): 150 MB/s read, 450 MB/s write
4-disk RAIDZ1 (1 vdev): 145 MB/s read, 300 MB/s write
I have dozens more disks, but I am not going to continue unless someone can verify that these numbers are in the ballpark. What I am specifically referring to is the READ speed. It is, frankly, unacceptably low, and I'll have to find another solution if this is typical of spinning-disk performance under ZFS. If someone could verify that something isn't massively wrong, I can move on without testing larger arrays. From observing the per-disk activity, it seems that ZFS is spreading reads across all four disks (in the 4-disk mirror, for instance), and it must either be using too small a read size per disk or repositioning the heads and incurring seek penalties, neither of which is desirable. I know ZFS does a lot of things in the background, which is why I was hoping this test would perform better so I could use it, but a 4-disk mirror should be able to read at at least 500 MB/s to match even the worst hardware RAID solutions. In its current incarnation it just doesn't appear to perform well on large sequential file transfers. Does anyone have conflicting information, or arrays of this size or larger where they can transfer single large files faster?
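If it helps with comparisons, a local sequential read test run directly on the TrueNAS box (taking SMB and the 10G network out of the picture) is along these lines; the path is just an example, and the test file should be larger than RAM so it isn't served back out of the ARC:

# create a large test file (compression is off, so zeros are a fair test), then read it back sequentially
dd if=/dev/zero of=/mnt/tank/media/testfile bs=1M count=65536
dd if=/mnt/tank/media/testfile of=/dev/null bs=1M

I'm happy to post local numbers like these alongside the SMB numbers above if people think the network path is the more likely culprit.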
Thanks for any assistance or further datapoints.