Recommendation PCIe card for NVMe SSDs for Supermicro X9SRi-F

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I am looking for a reliable card so that I can add at least 2 NVMe SSDs to my TrueNAS system, which is running on a Supermicro X9SRi-F board. If possible, though, I would prefer a card with at least 4 slots. The board has one PCIe 3.0 x16 slot and one PCIe 3.0 x8 slot, but one of them will soon be needed for a separate SFP+ NIC.

The workload for those SSDs will be VMs from a 2-node cluster running XCP-NG (if that is relevant). This is in a small business commercial context, so reliability is critical.

In particular I am looking for personal experience from you guys.

Buying used from a reputable seller in Germany would be preferred, but I am absolutely open to other options.

Thanks!
 

mrpasc

Dabbler
Joined
Oct 10, 2020
Messages
42
Does the SM X9SRi-F offer bifurcation for the PCIe slots, or are you looking for cards with a PCIe switch chip?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
There was a somewhat non-serious thread with @danb35 in which I listed a whole load of options in a less than serious manner. It was a list I got mostly from ServeTheHome, I think, and the ubiquitous AliExpress.

A card that needs bifurcation to work is about as simple a card as it's possible to build - so I guess it really doesn't matter where it comes from. OTOH, a card with a PCIe switch is a lot more expensive and complex.

It all depends on whether the mainboard supports bifurcation or not.
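If the board does bifurcate, one quick way to confirm each drive actually negotiated its own x4 link is to read it straight out of sysfs. A minimal sketch, assuming a Linux-based system (e.g. SCALE) with Python 3 - adjust to taste:

#!/usr/bin/env python3
# Minimal sketch: print the negotiated PCIe link for every NVMe controller.
# Assumes a Linux host; reads standard sysfs attributes, no extra packages.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.join(ctrl, "device")  # symlink to the underlying PCI device
    try:
        with open(os.path.join(pci_dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(pci_dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except FileNotFoundError:
        width, speed = "?", "?"
    print(f"{os.path.basename(ctrl)}: x{width} @ {speed}")

With 4x4x4x4 set correctly you'd expect each drive to show a x4 link (the reported speed can drop at idle due to power management); a drive showing fewer lanes, or not showing up at all, usually points at the BIOS setting or the slot.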
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Thanks! It appears from the manual and various posts that the X9SRi-F does support bifurcation. I will still have a look in my BIOS, but it seems some optimism is warranted ;-)
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Have you purchased it yet..?

I'm going to bitch and moan a bit as a cautionary tale:

First, I had no problem getting 9GB/s with 4 of the drives I've started selling now (hope I did the right thing) via a HighPoint SSD7120 on an i7-8700K (no ECC and not many PCIe lanes, which is why I started looking at "better equipment" ... you know, "enterprise" equipment that's more performant, lol).

But I seem to encounter a LOT of people who won't explicitly say that TNS or TNC just doesn't easily perform well with SSDs - whether SATA, SAS or NVMe - until I ask questions. And only then do I hear how "unreasonable my expectations are" for an ARRAY of drives in which each member is 2x - 3x faster than the HIGHEST SPEED I'VE EVER SEEN my spinning array hit (1.2GB/s). For reference:

4 of my NVMe (Micron 7300 Pro):
- Write: 2.2GB/s
- Read: 3.2GB/s

8 of my NVMe (Micron 9300 Pro):
- Write: 3.2GB/s
- Read: 3.4GB/s

Yet ... when reading or writing THE SAME EXACT files ... to and from a peer with an SSD that, even while pretty full (right now), gets:
M1x MacBook Pro
- Write: 3.5GB/s
- Read: 4.5GB/s

... the transfer still gets only 550MB/s ... whether it's to or from an array of:
- 4x 2.2GB/s W (7300 Pro) ...
- 8x 3.2GB/s W (9300 Pro) ...

Maybe the problem really is just my expectations.
The spinning array is different; it's an old-ass T320 rockin' an E5-2400 v2 with DDR3.
The NVMe machine isn't "great", but it's an R7415 with an EPYC CPU with 128 PCIe 3.0 lanes.*

(granted, those a$$hats at Dell kept 64 lanes to do absolutely NOTHING with; providing only 32 lanes to the 24 NVMe slots).

Still ... I tried splitting the 4 NVMe drives between the two banks of 12 slots, each of which gets 16 PCIe lanes (32 in this config).
If I were "lane limited" you'd expect a difference vs. putting all 4 NVMe drives in the same bank, which has only 16 lanes.
And YET, I still get 560MB/s using 4 NVMe drives (that each do 2.2GB/s) ... the same DOG____ performance whether they're all in one bank or split to maximize the topology.
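To put rough numbers on why that bugs me so much - a quick back-of-the-envelope sketch (the per-lane and per-drive figures below are assumptions, not measurements):

# Back-of-the-envelope: where SHOULD the bottleneck be? (all figures are rough assumptions)
GBPS_PER_PCIE3_LANE = 0.985   # approx. usable GB/s per PCIe 3.0 lane after encoding overhead
DRIVE_WRITE_GBPS = 2.2        # per-drive sequential write (Micron 7300 Pro, roughly)

def bank_ceiling(drives, lanes):
    """Aggregate ceiling for one bank: limited by the drives or by the bank's uplink lanes."""
    return min(drives * DRIVE_WRITE_GBPS, lanes * GBPS_PER_PCIE3_LANE)

same_bank = bank_ceiling(4, 16)                        # all 4 drives behind one 16-lane bank
split     = bank_ceiling(2, 16) + bank_ceiling(2, 16)  # 2 drives per bank, 16 lanes each

print(f"all in one bank: ~{same_bank:.1f} GB/s ceiling")
print(f"split 2 + 2:     ~{split:.1f} GB/s ceiling")
print("observed:        ~0.56 GB/s")

Either way the math says the drives themselves (~8.8GB/s aggregate) are the ceiling, not the lanes - so splitting them SHOULDN'T matter, and it doesn't ... but the observed number is a tiny fraction of either ceiling.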

Hell ... had it given ANY sign of improvement, that would've been indication enough to prompt me to drop even more $$...
To buy either an R750 or R7525 - both actually provide 96 lanes to the 24 NVMe slots. But alas ... it's STILL no better.

So maybe it's not (just) Dell..?

Maybe it's not ZFS though ... bc when I tested with Ubuntu..? I got appropriate single (NVMe) drive performance.
But when I created a RAID-5 from 3x NVMe drives ... it got roughly the same performance as TNC, +100MB/s R/W (the difference probably being ZFS checksum overhead, etc.).

Since I'd only tested in TNC until then, maybe it was a FreeBSD issue. So I installed TNS ... and still got ~500MB/s W and ~600MB/s R.

So it can't even get, in Linux-based ZFS (TNS), the performance a single NVMe drive gets in Ubuntu ... wtf!?
So Dell are turds for selling a device claiming it supports 24 NVMe drives when it barely has the electrical bandwidth for 8.
But what is UP with TNC & TNS when it comes to SATA SSDs (tested with 6x 870 EVO that do ~500MB/s R/W each), which only get ~600MB/s in RAIDZ1!?

What I'm saying is that SATA SSDs in an array don't retain even a FRACTION of the per-drive performance that spinning drives retain when they're in an array.
And NVMe drives seem to retain even less.
I'd imagine for small files..? (which I'll test) they'll do an outstanding job "retaining" their performance.
But their bandwidth on the same kind of ("like") data!? Seems terrible so far. And I don't know what I should test next.
I'd like to know what works and why, without spending a fortune basically doing research the companies should do.
And depending on how pervasive / persistent this is..? Why didn't iXsystems mention it..?
I HIGHLY DOUBT this doesn't also affect Optanes used for ZIL or L2ARC.
How much do I have to spend to find out which drives (SATA/SAS/NVMe), controller, or CPU (Xeon or EPYC) gives good value & performance?


My point..?
Don't count on easily / cheaply getting performance that intuitively corresponds to the specs that motivated your purchase.
There's other BS going on that apparently people (who likely know about it) aren't talking about.

Hopefully I can find someone who's used an R7525 or an R750 with SSDs to advise me.

Otherwise..? I'm gonna get a T630 with 32 SFF slots and use spinning drives ... bc so far this has been HORRIBLE.

Getting the same ~600MB/s write whether I use 4 drives or 8..? Even when those 8 drives are themselves 150% of the 4 drives' performance?

It's like something's "ensuring" I don't break a speed limit.

I've tested with an SFP28 switch & NICs, but again, I don't even THREATEN a 10Gb limit.

My EPYC CPU..? Doesn't break 6% utilization.
I've used fio ... and reviewed the performance in the ZFS performance metrics (the GUI reports).
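For reference, this is roughly the shape of the sequential test I've been running - a minimal Python wrapper around fio (fio needs to be installed, and /mnt/tank/fio-test is a made-up path; point it at a dataset on the pool under test):

#!/usr/bin/env python3
# Rough sketch of a large-block sequential write test via fio.
# Assumes fio is installed; TEST_DIR is a placeholder path - change it.
import subprocess

TEST_DIR = "/mnt/tank/fio-test"

cmd = [
    "fio",
    "--name=seqwrite",
    "--rw=write",              # sequential writes; swap in --rw=read for the read pass
    "--bs=1M",                 # 1 MiB blocks, i.e. the big-file streaming case
    "--size=16G",
    "--numjobs=1",
    "--ioengine=posixaio",
    "--end_fsync=1",           # fsync at the end so RAM/ARC isn't all we measure
    f"--directory={TEST_DIR}",
    "--group_reporting",
]
print(" ".join(cmd))
subprocess.run(cmd, check=True)

Then compare the bandwidth fio reports against the single-drive numbers above and against what the GUI graphs show.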

And when I ask for help?
I get chores (understandably at first) ...
But after doing them and showing abysmal performance ... only to get more chores?
Or I'm thoughtlessly told "it's the CPU" (when it's at 5% utilization and nominal temps..?).

Next chore: "Run mirrors"
...as if I couldn't just do that with spinning drives..? Or as if that doesn't negate buying performance drives in the first place?
Anyway, even after I did it, that didn't produce any new suggestions.

I KNOW there are smart people here (who probably not only know my problem, but know how to fix it).
Ericloewe and others are super smart.
But so far? I've received only chores.
The only good ideas? Came from me, myself and I. :cool:
- Comparing perf between Ubuntu and TNS using similar configs or single drives.
- Splitting drives between banks to see if more PCIe lanes help at all.

My point..??
I HOPE you get better (inexpensive) results.
Me? I've spent too much on NVMe drives & machines ... only to get 7200rpm performance.

I really hope you're able to find the solutions I've failed to.
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
I have an X9SCM-F board with a SYBA SI-PEX40110 M.2 M-key NVMe to PCIe x4 adapter card that has been running reliably since 2017. You cannot boot from it, but it handles data just fine.

This particular card supports only one NVMe SSD, but they have other cards in the family that support more drives. SYBA products are sometimes sold under the I/O Crest brand.

Good luck with your search.
 

John Anders

Cadet
Joined
Nov 10, 2023
Messages
1
Thanks! It appears from the manual and various posts that the X9SRi-F does support bifurcation. I will still have a look in my BIOS, but it seems some optimism is warranted ;-)
Have you got a card working?

I have the same motherboard and have tried this card, but with no luck on the first try.

Tried with 4x4x4x4 bifurcation.

U.2 NVMe

I am totally new to this type of hardware and think I am missing some important information on how to get this to work.

I am using Proxmox and not TrueNAS, but could not find much on this motherboard anywhere else.
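One basic sanity check that seems worth doing first is whether the drives show up on the PCI bus at all - a rough sketch, assuming Python 3 on the Proxmox host (no extra packages needed):

#!/usr/bin/env python3
# Rough sanity check: does the kernel see any NVMe controllers on the PCI bus?
import glob

nvme = []
for dev in glob.glob("/sys/bus/pci/devices/*"):
    with open(f"{dev}/class") as f:
        if f.read().strip() == "0x010802":   # PCI class code for NVMe controllers
            nvme.append(dev.rsplit("/", 1)[-1])

print(f"{len(nvme)} NVMe controller(s) visible: {nvme}")

If fewer controllers show up than drives installed, the problem is most likely below the OS (the bifurcation setting, the card/cabling, or drive power) rather than a Proxmox/driver issue.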
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I use the latest BIOS and an Asus Hyper M.2 X16 Card V2 with 2 Samsung 970 EVO Plus drives and 4x4x4x4 bifurcation. Only light test use so far, but it seems to be working in general.
 

Smeden

Cadet
Joined
Feb 13, 2024
Messages
2
I use the latest BIOS and an Asus Hyper M.2 X16 Card V2 with 2 Samsung 970 EVO Plus drives and 4x4x4x4 bifurcation. Only light test use so far, but it seems to be working in general.
Great work... I will try a similar card on my X9SRI-F :smile:

I assume the Asus Hyper card is a pure 4x4x4x4 interface without a controller?
Are you able to boot from the attached NVMes?

Best regards, Niels
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I assume the Asus Hyper card is a pure 4x4x4x4 interface without a controller?
Yes.
Are you able to boot from the attached NVMes?
That's entirely dependent on the system firmware being able to boot from NVMe devices. That's not a given for an X9 board, but it should be possible to modify the firmware image to include a suitable driver.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Are you able to boot from the attached NVMes?
That was never my use-case. I have been using a pair of small SATA SSDs for booting (ZFS mirror for the config) for a few years now. The NVMe SSDs are a recent addition and in a mirror for running VMs on my XCP-ng box.
 

Smeden

Cadet
Joined
Feb 13, 2024
Messages
2
Yes.

That's entirely dependent on the system firmware being able to boot from NVMe devices. That's not a given for an X9 board, but it should be possible to modify the firmware image to include a suitable driver.
Thanks to both of you. Firmware modification with injected drivers might be possible.

Only a very few LGA2011 v1 & v2 systems support NVMe boot, since this requires either an interface card with an option ROM (OPROM) or native support in the firmware.
To make things difficult, most "older" boards with server chipsets have closed/protected firmware that cannot be tampered with. :confused:

I recently found a thread on the ServeTheHome forum where a person hands out firmware mods that should enable NVMe boot on the X9SRI-F and many other Supermicro boards. :smile:
( https://forums.servethehome.com/ind...t-with-supermicro-x9da7-x9dri-f.13245/page-11 )

Asked him for a copy of this FW today.

I have a handful of NOS X9SRI motherboards equipped with E5-2650L CPUs and will try to create a small Proxmox cluster.
When I get the modified firmware, I'll give it a shot and post an update here. :smile:
 