New to FreeNAS: fast writes, broken read speed.

paradiss

Cadet
Joined
Apr 14, 2020
Messages
4
I built a new server this month to replace my Ubuntu machine with dedicated hardware running this more user-friendly option.

AMD 3600
MSI X470
24 GB RAM
10x 6TB Toshiba MG06ACA600EY
LSI 9211-8i, flashed to IT mode with current firmware

Right now I have all 10 drives in a stripe to test performance. RAIDZ2 is my end goal, and it suffers from the same poor read performance.

gstat while doing both a dd write of a 64 GB file and a read of the same 64 GB file. The performance is the same when reading from and writing to the pool through an SMB share.

[screenshot: unknown.png (gstat during the dd write and read)]
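
For reference, the dd commands were along these lines; the pool path and block size are illustrative rather than exact, and compression should be off on the test dataset so the zero-filled writes don't just compress away:

dd if=/dev/zero of=/mnt/testpool/ddfile bs=1M count=65536   # sequential write of a 64 GB file
dd if=/mnt/testpool/ddfile of=/dev/null bs=1M               # sequential read of the same file back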


I am quite new to FreeNAS and still reading through the documentation, and I have run out of possible tests to conduct.


Are there any tests or commands to pull up other info that might show the reason for the poor read performance?
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Is that what it looks like for the entire read? What are the actual results from the dd read? I would assume the read screenshot above would be valid for at least part of the test, since some of the data is read from ARC.
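
If you want to see how much of that read is being served from ARC rather than the disks, you can watch the ARC counters while the test runs; a minimal check on FreeBSD/FreeNAS looks something like:

sysctl kstat.zfs.misc.arcstats.size                                  # current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses   # cumulative hit/miss counters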
 

paradiss

Cadet
Joined
Apr 14, 2020
Messages
4
MikeyG said:
Is that what it looks like for the entire read? What are the actual results from the dd read? I would assume the read screenshot above would be valid for at least part of the test, since some of the data is read from ARC.


[screenshot: dd test.jpg]


I just ran another dd test to give the best possible answer to your question.


In addition, I wanted to show what an SMB read looks like.
Ten-disk stripe -> Windows 10 M.2 SSD:
[screenshot: smb transfer.jpg]

M.2 SSD on the FreeNAS machine -> Windows 10 M.2 SSD:
[screenshot: smb ssd transfer.jpg]
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
That is weird. Have you tried bypassing the HBA and just using the SATA connectors on the motherboard? Or the inverse, just the HBA? I assume your board has 6 SATA ports, so you may have to re-create the pool. Others may have more ideas.
 

paradiss

Cadet
Joined
Apr 14, 2020
Messages
4
MikeyG said:
That is weird. Have you tried bypassing the HBA and just using the SATA connectors on the motherboard? Or the inverse, just the HBA? I assume your board has 6 SATA ports, so you may have to re-create the pool. Others may have more ideas.


I spent the last couple of hours trying many different configurations based on your suggestion.

I did a stripe of 6 drives directly off the motherboard: good performance.

[screenshot: 1587025338893.png]

Writes peg all 6 drives at 100% busy, with easy 1 GB/s write speeds. This screenshot is of reads, which were something like 600 MB/s.

I built the same array with 6 on the mobo and one on the HBA; performance decreased by about 120 MB/s, and % busy also took a hit. I rebuilt the RAID with 6 on the mobo and TWO on the HBA: even worse performance than before. I rebuilt the array with 6 on the mobo and 4 on the HBA, and performance was basically nonexistent, with lots of dips to 20 MB/s.

[screenshot: 1587025683626.png]

Screenshot from the 10-disk stripe; only 2 disks would be active at a time.

I then built an array of 4 disks on the HBA only, and it worked... sort of, but not nearly as well as on the mobo.
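
In case anyone wants to reproduce these combinations, a throwaway test stripe can be built from the shell roughly like this (device names are examples, the create/destroy steps wipe the disks, and the GUI is the normal way to build real pools):

zpool create -f testpool da0 da1 da2 da3      # temporary 4-disk stripe, no redundancy
zfs set compression=off testpool              # so the /dev/zero dd numbers are meaningful
dd if=/dev/zero of=/testpool/ddfile bs=1M count=65536
dd if=/testpool/ddfile of=/dev/null bs=1M
zpool destroy testpool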


With this information, I swapped HBAs from the LSI 9211-8i to an LSI 9207-8i: same issue.


Does this spark any ideas about a possible cause or a setting I need to change? I feel like my drives just do not agree with the HBAs. I can't find any info about the drives: no reviews, no benchmarks. It's like they appeared on the market out of the void.
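
(If it helps to check, the drive identity and the link speed negotiated behind the HBA can be read with something like the following; the device name is just an example.)

camcontrol devlist       # which controller/driver each disk is attached to
smartctl -i /dev/da0     # drive model, firmware, and current SATA link speed
sas2flash -listall       # LSI HBA firmware and BIOS versions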
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Just an idea, but have you tested all the disks individually for problems/bad blocks, or at least run long SMART tests on them? One bad disk could bring down performance for an entire vdev. Search the forums for the method on how to do this. Using tmux you can run badblocks on all of them at once and just let them cook for a day or so.

Also, I personally like gstat -p, as it shows only physical disks, making it easier to read.

If drive testing doesn't reveal anything, you may want to run your tests on one drive at a time and see what happens.
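
Roughly what that looks like; device names are examples, and badblocks -w destroys everything on the disk, so only run it before the pool is built:

smartctl -t long /dev/da0          # long SMART self-test (repeat for each disk)
tmux new -s burnin                 # one window/pane per disk
badblocks -b 4096 -ws /dev/da0     # destructive write+verify pass; -b 4096 is needed on drives this large
smartctl -a /dev/da0               # check results and reallocated/pending sectors afterwards

# single-drive sequential read, for the one-drive-at-a-time comparison
dd if=/dev/da0 of=/dev/null bs=1M count=10240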
 

paradiss

Cadet
Joined
Apr 14, 2020
Messages
4
MikeyG said:
Just an idea, but have you tested all the disks individually for problems/bad blocks, or at least run long SMART tests on them? One bad disk could bring down performance for an entire vdev. Search the forums for the method on how to do this. Using tmux you can run badblocks on all of them at once and just let them cook for a day or so.

Also, I personally like gstat -p, as it shows only physical disks, making it easier to read.

If drive testing doesn't reveal anything, you may want to run your tests on one drive at a time and see what happens.

Will do that. I originally started with Unraid and would prefer the performance of FreeNAS. I did the preclear on all of the drives with no errors.

I will check the forums for a test and let it run.

Thank you for the tip about -p; it does indeed make the output nicer to look at.

I appreciate all of your replies. Really racking my brain trying to figure out why it's struggling on any HBA.
 