Have you purchased it yet..?
I'm going to bitch and moan a bit as a cautionary tale:
First, some background: I had no problem getting 9GB/s with 4 of the drives I've now started selling (hope I did the right thing) via a HighPoint SSD7120 on an i7-8700K (no ECC and not many PCIe lanes, which is why I started looking at "better equipment" ... you know, "enterprise" gear that's supposedly more performant. lol)
But I seem to encounter a LOT of people who won't explicitly say that TNS or TNC just doesn't easily perform well with SSDs, whether SATA, SAS, or NVMe ... until I ask questions. And only
then do I hear how "unreasonable my expectations are" ... for an ARRAY of drives in which each member is 2x - 3x faster than the HIGHEST SPEED I'VE EVER SEEN my spinning array get (1.2GB/s). For reference:
4 of my NVMe (Micron 7300 Pro):
- Write: 2.2GB/s
- Read: 3.2GB/s
8 of my NVMe (Micron 9300 Pro):
- Write: 3.2GB/s
- Read: 3.4GB/s
Yet ... when reading or writing THE EXACT SAME files ... to and from a peer with an SSD that, even while pretty full (right now), gets:
M1 Max MacBook Pro
- Write: 3.5GB/s
- Read: 4.5GB/s
... but still gets only 550MB/s ... whether it's to and from an array of:
- 4x 2.2GB/s W (7300 Pro) ...
- 8x 3.2GB/s W (9300 Pro) ...
Maybe the problem really is just my expectations.
The spinning array is different; it's an old-ass T320 rocking an E5-2400 v2 with DDR3.
The NVMe machine isn't "great", but it's an R7415 with an Epyc CPU that has 128 PCIe 3.0 lanes.
(Granted, those a$$hats at Dell kept 64 lanes to do absolutely NOTHING, providing only 32 lanes to the 24 NVMe slots.)
Still ... I tried splitting the 4 NVMe drives between the two banks of 12 slots, each bank getting 16 PCIe lanes (32 total in this config).
If I were "lane limited", you'd expect a difference versus putting all 4 NVMe drives in a single bank with only 16 lanes.
And YET ... I get the same DOG____ 560MB/s from 4 NVMe drives (each capable of 2.2GB/s), whether they're all in one bank or split to maximize the topology.
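The lane math backs this up. Here's a back-of-the-envelope sketch (the ~985MB/s usable-per-lane figure for PCIe 3.0 and the drive write specs are my assumptions):

```python
# Rough check: can 16 PCIe 3.0 lanes even be the bottleneck for 4 drives?
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, ~985 MB/s usable.
PCIE3_MBPS_PER_LANE = 985

bank_lanes = 16
bank_bw = bank_lanes * PCIE3_MBPS_PER_LANE    # bandwidth of one 16-lane bank

drives = 4
write_per_drive = 2200                        # MB/s, Micron 7300 Pro write spec
aggregate = drives * write_per_drive

print(f"one 16-lane bank:  ~{bank_bw} MB/s")  # ~15760 MB/s
print(f"4 drives flat out: {aggregate} MB/s") # 8800 MB/s
print(f"lane limited: {aggregate > bank_bw}") # False
```

So even the "worst" topology (all 4 drives in one bank) leaves nearly 2x the needed headroom, which is consistent with the split-bank test changing nothing.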
Hell ... had it shown ANY sign of improvement, it would've satisfied the itch prompting me to drop even more $$...
On either an R750 or an R7525. Both actually provide 96 lanes to the 24 NVMe slots. But alas ... it's STILL no better.
So maybe it's not (just) Dell..?
Maybe it's not ZFS, though ... because when I tested with Ubuntu..? I got appropriate single-drive NVMe performance.
But when I created a RAID-5 array from 3x NVMe drives ... it got roughly the same performance as TNC, plus ~100MB/s R/W (the gap probably checksum overhead, etc.).
Since TNC is FreeBSD-based and I'd only tested a single drive there, maybe it was a FreeBSD issue. So I installed TNS ... and still got ~500MB/s W and 600MB/s R.
So TNS can't even match, in Linux-based ZFS, what a single NVMe drive gets in Ubuntu... wtf!?
So Dell are turds for selling a device claiming it supports 24 NVMe when it barely has the electrical bandwidth for 8.
But what is UP with TNC & TNS when it comes to SATA SSDs? Tested with 6x 870 EVOs (each ~500MB/s R/W), they only get ~600MB/s in RAIDZ1!?
What I'm saying is: SATA SSDs don't retain a FRACTION of the per-drive performance that spinning drives retain when put in an array.
And NVMe drives seem to retain even less.
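To put numbers on that "fraction of per-drive performance" complaint, here's the comparison I'm making (my own observed figures; I'm assuming RAIDZ1 with one parity drive, so theoretical streaming bandwidth is roughly N-1 drives' worth; adjust for your layout):

```python
def retained(observed, n_drives, per_drive, parity=1):
    """Fraction of theoretical streaming bandwidth (data drives only) actually observed."""
    return observed / ((n_drives - parity) * per_drive)

# 6x 870 EVO (~500 MB/s each) in RAIDZ1, observed ~600 MB/s total:
print(f"SATA SSD: {retained(600, 6, 500):.0%}")    # 24%
# 4x Micron 7300 Pro (2200 MB/s writes each), observed ~560 MB/s:
print(f"NVMe:     {retained(560, 4, 2200):.0%}")   # 8%
```

For contrast, a spinning array hitting 1.2GB/s from, say, 8 data drives at ~200MB/s each would be retaining ~75%. That's the gap I keep pointing at.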
I'd imagine for small files..? (which I'll test) they'll do an outstanding job "retaining" their performance.
But their bandwidth on the same kind of data!? Terrible so far. And I don't know what I should test next.
I'd like to know what works and why, without spending a fortune basically doing the research these companies should do.
Depending on how pervasive / persistent this is..? Why didn't iXsystems mention it..?
I HIGHLY DOUBT this doesn't also affect Optanes used for ZIL or L2ARC.
How much do I have to spend to find out which drives (SATA/SAS/NVMe), controller or CPU (Xeon or Epyc) gives good value & performance?
My point..?
Don't count on easily / cheaply getting performance that intuitively corresponds to the specs motivating your purchase.
There's other BS going on that apparently people (who likely know about it) aren't talking about.
Hopefully I can find someone who's used an R7525 or an R750 with SSDs to advise me.
Otherwise..? I'm gonna get a T630 with 32 SFF slots and use spinning drives ... bc so far this has been HORRIBLE.
Getting the same ~600MB/s write whether I use 4 drives or 8..? Even when each of those 8 drives is ~150% as fast as the 4?
It's like something's "ensuring" I don't break a speed limit.
I've tested with an SFP28 switch & NICs, but again, I don't even THREATEN the 10Gb mark (550MB/s is only ~4.4Gb/s).
My Epyc CPU ..? Doesn't break 6%.
I've used FIO ... and reviewed the performance in ZFS performance metrics (the GUI report).
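For anyone who wants to reproduce this, here's the shape of the fio job I've been running (a sketch, not gospel: the directory path is a placeholder, and depending on your ZFS version O_DIRECT may not be supported, so drop `direct=1` if fio errors out). Run it directly on the pool first to take the network out of the equation:

```ini
; seq.fio -- sequential throughput against a dataset on the pool
; directory is a placeholder: point it at a dataset on the pool under test
[global]
directory=/mnt/tank/fiotest
bs=1M
size=8G
ioengine=libaio
iodepth=16
direct=1
runtime=60
time_based

[seq-write]
rw=write

[seq-read]
; stonewall: wait for the write job to finish before starting reads
stonewall
rw=read
```

Then `fio seq.fio`, and compare the reported bandwidth against the single-drive numbers from the same box.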
And when I ask for help?
I get chores (understandably at first) ...
But after doing them and showing abysmal performance ... only to get more chores?
Or I'm thoughtlessly told "it's the CPU" (when it's at 5% utilization and nominal temps..?)
Next chore: "Run mirrors"
...as if I couldn't just do that with spinning drives..? Or as if that doesn't negate buying performance drives in the first place?
Anyway, even after I did it, that didn't produce any new suggestions.
I KNOW there are smart people (probably who not only know my problem, but know how to fix it).
EricLowe and others are super smart.
But so far? I've received only chores.
The only good ideas? Came from me, myself, and I:
- Comparing perf between Ubuntu vs TNS using similar configs or single drives.
- Splitting drives between banks to see if more PCIe lanes helps at all.
My point..??
I HOPE you get better (inexpensive) results.
Me? I've spent too much on NVMe drives & machines ... only to get 7200rpm performance.
I really hope you're able to provide solutions I've failed to.