Testing a PCIe storage card very thoroughly.

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
I'm going to be trying a PCIe card that handles switching for me (for a non-bifurcation board), and furthermore it will be running off a PCIe 3.0 extension cable.



I will gladly spend significant time testing reliability / latency before deploying.


I assume some kind of IOPS test between my various NVMe drives and Optane drives will be useful to confirm whether my latency is better with Optane (despite requiring a card that handles switching!?)


Does anyone know which command to run to test IOPS for, say, 15 minutes across a 20 GB file?
I think I've done this before, but it was years ago, so I've forgotten the command.
I figure more IOPS would imply lower latency regardless.
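From memory it was something along these lines, though I'm probably mangling the flags (the pool path is just an example):

Code:
# 900 seconds = 15 minutes, 20 GB test file, random 4K reads at a decent queue depth
fio --name=iops-test --filename=/mnt/testpool/fio-testfile --size=20G --rw=randread --bs=4k --iodepth=32 --numjobs=1 --ioengine=libaio --direct=1 --time_based --runtime=900 --group_reporting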






Secondly, the reliability. I can wait weeks and weeks; what's the best way to basically "ping" the disks semi-regularly without wearing them out? Should I just create a pool and send a snapshot or something to it every hour?
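Something like this from cron, maybe? (Pool and dataset names are just placeholders, and repeated runs would need to send incrementals from the latest snapshot rather than from @seed every time.)

Code:
# one-time: seed the test pool with a full copy
zfs snapshot srcpool/data@seed
zfs send srcpool/data@seed | zfs recv testpool/data

# then hourly-ish: snapshot and send only the delta since the seed
zfs snapshot srcpool/data@hourly-$(date +%Y%m%d%H)
zfs send -i srcpool/data@seed srcpool/data@hourly-$(date +%Y%m%d%H) | zfs recv testpool/data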


Thank you so much for the help!

I can list the specific hardware if need be, but I'm not sure it matters. The methodology does.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Secondly, the reliability. I can wait weeks and weeks; what's the best way to basically "ping" the disks semi-regularly without wearing them out? Should I just create a pool and send a snapshot or something to it every hour?
I'm not sure there is a good way of doing this with SSDs. But: if you mostly want to test the PCIe switch, you could attach NICs instead to said switch and generate traffic to your heart's content.
May need additional NICs and adapters, though.
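Something along these lines would do, e.g. iperf3 pushed from a second machine through a NIC sitting behind the switch (the address is a placeholder):

Code:
# on the box with the NICs behind the switch
iperf3 -s

# from a second machine: 4 parallel streams for an hour
iperf3 -c 10.0.0.1 -P 4 -t 3600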
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
When I say PCIe switch, it's a quad NVMe card, but an expensive one (it has the switching processor, an ASM2824).

I can put in a heap of 256 GB NVMe disks and some Optane ones (4 of each).

I just want to know this thing is reliable.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
@Ericloewe if you recall the command (fio?) to test latency, I would love to hear it.
(And thank you)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
'fraid not, sorry.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
For latency testing you'll probably want to blast 4K IO with iodepth=1, something like

Code:
fio --filename=/mnt/poolname/datasetname/rwlatency --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=1 --numjobs=1 --time_based --group_reporting --name=rwlatency-test-job --runtime=60 --eta-newline=1 --size=16G


More fun fio examples available here:

https://docs.oracle.com/en-us/iaas/Content/Block/References/samplefiocommandslinux.htm

A scrub will generate a good amount of I/O. If you're aiming to test general reliability of the silicon, the PCIe packets flying around should be enough; it doesn't necessarily need to be writes. Have the pool scrub itself constantly for a bit and check your dmesg for PCIe-related errors.
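A rough sketch of that loop, with "testpool" as a stand-in for your actual pool name:

Code:
#!/bin/sh
# Keep scrubbing the pool and watch the kernel log for PCIe/AER complaints.
# "testpool" is a placeholder - substitute your own pool name.
while true; do
    zpool scrub testpool
    # wait for the current scrub to finish before kicking off the next one
    while zpool status testpool | grep -q "scrub in progress"; do
        sleep 60
    done
    dmesg | grep -iE "aer|pcie bus error" || echo "$(date): no PCIe errors logged"
done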
 