Problem: speeds with 4x NVMe M.2 (stripe) on a Dell R630

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
Good morning,

I have a Dell R630 server that will be dedicated to storage; I plan to connect my two ESXi nodes to it for HA.

Currently, in my Dell R630 (8x 2.5" bays) I have a Hyper M.2 V2 card with four M.2 slots. Bifurcation is enabled on the Dell and the four Gen 3 M.2 drives are detected correctly. As a sanity check I installed Windows Server on the same hardware: a RAID 0 in Windows reads at roughly 10 GB/s with no problem. But when I stripe the four drives in TrueNAS, I get roughly 600 MB/s read and 200 MB/s write (tested from the shell with fio; example commands after the disk list below).
My disks are:
3x Samsung 970 EVO
1x Samsung 980
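
For reference, this is roughly the kind of fio run I'm doing from the TrueNAS shell. The /mnt/tank/test path and the exact parameters below are placeholders, not necessarily the ones that produced the numbers above:

# sequential read against a file inside the pool (dataset path is a placeholder)
fio --name=seqread --directory=/mnt/tank/test --rw=read --bs=1M --size=10G \
    --ioengine=posixaio --iodepth=16 --runtime=60 --time_based --group_reporting

# same thing for sequential write
fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=10G \
    --ioengine=posixaio --iodepth=16 --runtime=60 --time_based --group_reporting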

Is there something I'm doing wrong? (I'm just getting started with TrueNAS.)

I'm on the "Core" version

I should also mention that this is for my business; I have around 20 VMs running on it.

If I forgot something, tell me :)

Dell R630:

2x E5-2690 v3
64 GB DDR4 ECC 2133
4x NVMe
2x 10 Gb SFP+

PS: the current drives are only for testing.
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
Are you looking at sequential speeds on Windows and random speeds on TrueNAS? My guess is that the 600/200 figures are 4K random read/write, which sounds about right for consumer drives.

I think it would help to share screenshots of your results.
 

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
What is the best way to test throughput directly on the TrueNAS server? My NVMe drives are rated at about 3,500 MB/s read and 2,500 MB/s write each, so combined, the network should normally be the bottleneck rather than the drives; instead the performance is really poor. I tested on Windows Server with the same configuration and got good speeds from a RAID 0 under Windows, but in a stripe on TrueNAS the speed just isn't there.
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
fio seems to be very popular for benchmarking I/O. I'm sure there are others here with far more experience using it than I have.

I'm still interested in seeing the results you're comparing. I still suspect you may be comparing sequential read/write on Windows to low queue-depth random read/write on TrueNAS.

If you look at this page and compare it to the next page, you'll see that some SSDs have great sequential speeds but are terrible at random I/O, which is why it's important to compare the same metrics between your Windows and TrueNAS results:

Notice the read benchmarks for the 970 EVO: random is between 60 and 100 MB/s, while sequential is around 2,000 MB/s. The write speeds don't match what you're seeing, but they can probably be disregarded since they are 'burst' writes and likely skewed by caching.
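
To make the comparison fair, running the same fio job on the TrueNAS side for both a sequential and a 4K random case would give you both numbers from one tool. A sketch, with placeholder path, sizes and queue depths to adapt:

# large sequential blocks, deep queue -- comparable to the sequential figure you saw on Windows
fio --name=seq --directory=/mnt/tank/test --rw=read --bs=1M --size=10G \
    --ioengine=posixaio --iodepth=32 --runtime=60 --time_based --group_reporting

# 4K random read at queue depth 1 -- this is where consumer drives drop to tens of MB/s
fio --name=rand4k --directory=/mnt/tank/test --rw=randread --bs=4k --size=10G \
    --ioengine=posixaio --iodepth=1 --runtime=60 --time_based --group_reporting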
 

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
[inline screenshot: benchmark results]

Here are the different tests (I tested both with the Hyper card and without it, directly on the motherboard).
This storage will hold the VM datastores, so I'd like as much speed as possible for my ESXi hosts.
Let me know if you have a more suitable configuration in mind; otherwise, I'll think through all the possibilities :)
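
For context, the pool I'm benchmarking is just a plain four-way stripe. The rough command-line equivalent of what the GUI built, and the kind of alternative I could consider, would look something like this (device names are placeholders for however the system enumerates the NVMe drives):

# plain stripe of all four drives, no redundancy -- what I'm testing now
zpool create tank nvd0 nvd1 nvd2 nvd3

# alternative: two mirrored pairs striped together -- half the capacity, but redundancy for the VM datastores
zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3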
 

Attachments

  • nas-128k-256.png
  • nas-128k-2048.png
  • nas-bs4k-2048M.png

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
Update: I did transfer tests over SMB. I get about 700 MB/s read (from two NVMe drives in a stripe, each rated at 3,500 MB/s) but only 1 MB/s write. Any idea?
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
Thanks for posting your results.

What size are your SSDs? How much cache do they have? Is it possible the slow speeds only show up once the cache has been saturated? Notice that the larger your writes, the slower the results.

I don't understand your SMB transfer test. Do you mean 7,000 MB/s or 700 MB/s? Is that reading or writing? And is it two disks in the RAID 0 now, not four? How big is the transfer?
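
One way to check the cache theory would be a sustained write sized well past whatever SLC/DRAM cache the drives have; the path and size here are only examples:

# sustained sequential write, large enough to blow past any drive cache
fio --name=sustained-write --directory=/mnt/tank/test --rw=write --bs=1M \
    --size=50G --ioengine=posixaio --iodepth=16 --group_reporting

Watching the bandwidth figure fio prints while it runs should show whether throughput falls off once the cache fills.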
 

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
Regarding the write problems: I just ran some more tests, and it turns out one of my fiber cables is most likely broken.

I'll come back with real tests once the hardware is in good shape :)
 

tyger

Cadet
Joined
Feb 7, 2024
Messages
6
Hello, as promised, a little update.
I replaced my switch with a MikroTik CRS309. I now get 8 Gb/s in iperf, but I barely reach 1 GB/s in SMB transfers.
Since these speeds are still insufficient for what I want to do, I'm wondering: has anyone already run 100 Gb/s QSFP28 cards in a Dell R630?
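
For reference, the iperf number comes from a plain client/server run across the 10 Gb link, roughly like this (iperf3 syntax shown; the address is a placeholder):

# on the TrueNAS box
iperf3 -s

# from a machine on the other end of the link
iperf3 -c 192.168.10.5 -P 4 -t 30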
 