Fast when striped, slow with anything else… CPU? [RESOLVED]

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Hi!

I have a TrueNAS server with 8x 2TB WD Red (WD20EFRX). I had it configured as two 4-drive RAIDZ1 vdevs, combining to around 11TB.
The CPU is an i5-3470T @ 2.90GHz, and the server has 8GB of ECC memory.

It always did well when accessed via Gigabit LAN: around 100MB/s, as expected.
Now I've put a QNAP 10Gbit/s card in there and connect to it via a 2.5Gbit/s USB dongle.
It stays around 100MB/s when reading and around 220MB/s when writing. Only if I destroy the pool and set it up as a complete stripe can I saturate the 2.5Gbit/s link.
So I guess it's NOT the drives or the controller. I suspect the CPU, but it only peaks around 50% in the dashboard when I hit the share with Blackmagic Disk Speed Test.
Using an SSD as a cache doesn't help at all.

Any suggestion for a strategy to find the bottleneck?
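For instance, something that takes the network out of the picture entirely, like a local read test? A rough sketch of what I could run, assuming a pool named tank mounted at /mnt/tank (names are placeholders):

    # Write a test file larger than RAM, then read it back locally (no network, no SMB).
    # Note: disable compression on the test dataset first, or the zeroes compress away
    # and both numbers become meaningless.
    dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=10240
    dd if=/mnt/tank/testfile of=/dev/null bs=1M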

Thanks,
Jörg




-------- SOLUTION ---------

Well, it was the controller. I swapped it for a controller with an LSI 2008 chipset and flashed it to IT mode. Around 240MB/s read and write speed over the 2.5Gbit/s network. Thanks for all the suggestions!
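For anyone finding this later: flashing a SAS2008 card to IT mode is usually done with LSI's sas2flash tool. A rough sketch of the commonly documented sequence; 2118it.bin is just an example image name from the 9211-8i guides, so use the firmware matching your exact card, and be aware that an interrupted flash can brick it:

    sas2flash -listall          # verify the controller is detected
    sas2flash -o -e 6           # erase the current IR/RAID firmware (do not reboot before the next step)
    sas2flash -o -f 2118it.bin  # write the IT-mode firmware image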

Jörg
 

Attachments

  • Screen Shot 2022-06-01 at 16.39.36.png

LarsR

Guru
Joined
Oct 23, 2020
Messages
719
Get more RAM. The new recommended minimum is 16GB; 32GB would be better.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Hm. I've put non-ECC RAM in there for a test. The write speeds are more consistent now, but reads still top out at around 80MB/s.
 

Attachments

  • Screen Shot 2022-05-09 at 08.07.29.png
  • Screen Shot 2022-05-09 at 08.08.09.png
  • Screen Shot 2022-05-09 at 08.08.47.png

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Oh, I guess I missed something. This is another test with just a stripe over all 8 drives, and I only get around 100MB/s there, too. So what is the problem? The controller is an AMCC 9650SE-8LP set to passthrough… maybe that's it?
 

Attachments

  • Screen Shot 2022-05-09 at 08.14.57.png

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Your problem seems to be related to your understanding of ZFS fundamentals. A striped vdev doesn't give you the performance benefit a true RAID controller would.
If you really want to test your network throughput, take the pool out of the equation and use iperf instead.
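A minimal check, assuming iperf3 is available on both machines (the hostname is a placeholder):

    iperf3 -s                      # on the TrueNAS box
    iperf3 -c truenas.local -t 10  # on the client, 10-second run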
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Since I get consistent write speeds of around 250MB/s, I guess my network throughput is fine. I'll try to take the controller out of the equation and test a single RAIDZ1 vdev connected to the mainboard.
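Roughly what I have in mind for the throwaway test pool, assuming the onboard disks show up as ada0 through ada3 (device names are placeholders):

    # One 4-disk RAIDZ1 vdev on the motherboard SATA ports.
    zpool create testpool raidz1 ada0 ada1 ada2 ada3
    zpool status testpool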
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Any single vdev will give you roughly the speed of a single disk. So a pair of mirrors will be roughly twice as fast as a 4-disk RAIDZ1. Best expected performance in your case would be 4 2-disk mirrors.
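Spelled out, that layout is one command; the device names here are placeholders:

    # Four mirror vdevs striped together: roughly 4x single-disk read throughput.
    zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5 mirror ada6 ada7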
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Yes, I already tested 4 2-disk mirrors, with no effect. I really suspect the controller. Maybe I'll test two 2-disk mirrors connected to the board.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Maybe I could drop in one more drive and do a stripe over three 3-disk RAIDZ1 vdevs :smile: That should give me around 300MB/s.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
So… a striped mirror made of 4 disks attached to the mainboard DOES perform better. Around 150MB/s.
 

Attachments

  • Screen Shot 2022-05-09 at 18.30.15.png

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Next round: RAIDZ1 made from 4 disks attached to the mainboard. Still 150MB/s, so let's see what happens when I stripe it with a second vdev.
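The striping step should just be a zpool add, assuming the second set of disks shows up as ada4 through ada7 (placeholders again). Worth remembering that a RAIDZ vdev can't be removed from the pool afterwards:

    # Extend the pool with a second 4-disk RAIDZ1 vdev;
    # ZFS stripes new writes across both vdevs.
    zpool add tank raidz1 ada4 ada5 ada6 ada7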
 

Attachments

  • Screen Shot 2022-05-09 at 19.07.39.png

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Just to be precise… iperf test:

[ ID] Interval        Transfer     Bandwidth
[  1] 0.00-10.01 sec  2804 MBytes  2349 Mbits/sec

2349 Mbit/s is roughly 294 MB/s, so the network side runs adequately.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
So, let's put the controller to work. I configured a hardware RAID6 over all 8 disks. Let's see what the performance will be once it finishes its "resilvering".
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
JCBone said:
So, let's put the controller to work. I configured a hardware RAID6 over all 8 disks. Let's see what the performance will be once it finishes its "resilvering".

Please don't do that. You'll lose data.

 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
*sigh* - OK. And if I try a plain old Linux distro on top of that? I just need fast-ish network-attached storage for ONE client.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
The controller runs in JBOD mode; maybe there is an HBA firmware for it? I'm really running out of ideas here. Performance via the onboard controller wasn't much better… a new mainboard, maybe? Another controller card?
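One more thing I could try: reading a disk raw through the 9650SE and comparing it with the same disk on an onboard port, to put a number on the controller's overhead. A sketch with placeholder device names (da0 behind the card, ada0 onboard):

    # Sequential raw read straight off the device, bypassing ZFS entirely.
    dd if=/dev/da0 of=/dev/null bs=1M count=4096    # disk behind the 9650SE
    dd if=/dev/ada0 of=/dev/null bs=1M count=4096   # same disk model on an onboard port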
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Striped SSDs connected to the motherboard SATA ports will always be the fastest option.
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
Sure, but I only need around 250MB/s… and I do have all those drives…
 

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
What do you know. Problem solved. I removed my TrueNAS CORE boot drive and installed a fresh copy of TrueNAS SCALE. Badabing. One pool with two RAIDZ1 vdevs. I guess I'll try some other configs, too. Rock-solid 250MB/s.
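To check that the pool itself sustains this without SMB in the loop, a local fio run works. A sketch, assuming fio is installed and the pool mounts at /mnt/tank (path is a placeholder); keep the file size above RAM so the ARC doesn't serve the whole read from cache:

    # 16 GiB sequential read at 1 MiB block size, single job, against the pool.
    fio --name=seqread --directory=/mnt/tank --rw=read --bs=1M --size=16g --numjobs=1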
 

Attachments

  • Screen Shot 2022-05-15 at 21.44.23.png

JCBone

Dabbler
Joined
Aug 24, 2014
Messages
20
All the drives in one RAIDZ2 vdev offers less write speed, but read is still OK. Is resilvering a 4-drive vdev too risky? I once heard you should use RAIDZ2 with more than six drives… and mine are just 2TB per drive.
 

Attachments

  • Screen Shot 2022-05-15 at 22.00.37.png