nvme missing interrupt

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Oops - now you got me :wink: What's VDI? Or IPOS?

We run Windows and Linux based applications that we cannot run natively in our FreeBSD jail based infrastructure. Like the Windows applications our accountant uses; or SOGo, because we could not get the FreeBSD port to run, the forums were not much help, and the Linux .deb packages ran out of the box on Ubuntu. Our poudriere runs in a VM on one of those boxes, because they are simply the fastest systems we have around, so we have some spare cycles to burn. Then there is our central Keycloak - again, easier and better supported on Linux. The latest and greatest ELK stack, so again - Linux. Stuff like that.

Everything that is classical "LAMP/FAMP" based and runs painlessly on FreeBSD is run in jails.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
What's VDI? Or IPOS?
VDI would be Virtual Desktop Infrastructure - e.g. VMware Horizon or Citrix XenDesktop. Desktop-as-a-Service, if you will. Typically very hungry for I/O, but ZFS tends to handle it well, since your parent/template image ends up effectively residing 100% in RAM.
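(If you want to sanity-check that "resides in RAM" claim on a FreeBSD/TrueNAS box, here's a minimal sketch I'd use - my addition, not from this thread - assuming the standard kstat.zfs.misc.arcstats sysctl counters are present:)

```python
# Rough sketch: report ZFS ARC size and hit ratio on FreeBSD via sysctl.
# Assumes the kstat.zfs.misc.arcstats counters exist (standard on FreeBSD/TrueNAS CORE).
import subprocess

def arcstat(name: str) -> int:
    """Read a single kstat.zfs.misc.arcstats counter with `sysctl -n`."""
    out = subprocess.run(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

hits = arcstat("hits")
misses = arcstat("misses")
size_gib = arcstat("size") / 2**30

total = hits + misses
hit_ratio = hits / total if total else 0.0
print(f"ARC size: {size_gib:.1f} GiB, read hit ratio: {hit_ratio:.1%}")
```

A hit ratio in the high 90s for a VDI template workload is what you'd expect when the parent image is effectively fully cached.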

IPOS, I think, is just a typo for "IOPS".
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
I wish I'd seen this earlier. We had this exact issue with a PowerEdge R740xd and 16 NVMe U.2 drives. The fix was upgrading to 12.0.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I actually looked at the list of NVMe drives, and the x2 drives are the smaller Optane drives (M.2); I could not find any enterprise 2.5" drives that were x2. I will continue to work with Supermicro support to figure out whether it's possible to do what you suggested (meaning, somehow letting some drives switch to x2).

You could've (yes, 10 months ago, sorry) -- had you known about the insufficient-lanes issue from the beginning -- used SAS-3 (12Gb/s) SSDs instead... I'm assuming the real issue is IOPS, and if the workload gets down to 4K block sizes, those might have outperformed the full set of 16 NVMe drives, as I'm thinking the other x16 lanes on your motherboard (and CPUs) are used for the PLX switch.
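(For a rough sense of the lane math being argued here, a back-of-envelope comparison of theoretical link bandwidth - my numbers, not from this thread; real 4K random IOPS are limited by the drives and the software stack long before the link itself:)

```python
# Back-of-envelope link-bandwidth comparison: PCIe 3.0 x4/x2 vs. a single SAS-3 lane.
# These are theoretical per-device ceilings only.

PCIE3_PER_LANE_GBS = 8 * (128 / 130) / 8   # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane
SAS3_GBS = 12 * (8 / 10) / 8               # 12 Gb/s, 8b/10b encoding   -> 1.2 GB/s

links = {
    "NVMe PCIe 3.0 x4": 4 * PCIE3_PER_LANE_GBS,
    "NVMe PCIe 3.0 x2": 2 * PCIE3_PER_LANE_GBS,
    "SAS-3 single lane": SAS3_GBS,
}

for name, gbs in links.items():
    # How many 4 KiB transfers per second the link alone could carry.
    iops_4k = gbs * 1e9 / 4096
    print(f"{name:20s} ~{gbs:.2f} GB/s, link ceiling ~{iops_4k / 1e6:.2f}M 4K IOPS")
```

The point being: even an x2 NVMe link or a single SAS-3 lane has far more 4K headroom than most drives can actually deliver, so the lane count alone doesn't settle the IOPS question.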

Anyway -- I have an R730xd ... and everyone who's responded to my posts about a config with 8-12 NVMe SSDs has emphatically told me it CANNOT support more than 4 NVMe x4 drives ... and yes, it has the same dual E5-2600 v3 or v4 CPUs as this machine of yours.

It just cannot possibly use more than 4x NVMe drives. Just no way. It's impossible.
It's because of the PLX. It's because the lanes are in use. It's because, it's because. Because Intel KNOWS. Because Dell knows. They'd never lie, downplay, or push other products.

(Like the way the Nehalem Mac Pro "cannot run Thunderbolt" because TB "is a feature integrated into the CPU" ... even though it DOES work.)

Of course -- until you see another company that's (also lying / wrong / in your case) saying it "supports 24x NVMe drives." lol

Anyway, I'd REALLY love to see the kind of benchmarks you've gotten (if it's not impossible / a pain to get that info posted).

But really, thanks for this info. I knew it'd work; I just have to figure out how to CONNECT them (and probably can't use standard connectivity. lol).
Maybe there's a product I can use from another generation that'll allow me to connect 8 NVMe U.2 drives.
I just wish companies offered the ability to mix SFF and LFF drives instead of having to be homogeneously one or the other (usually).
 