Intel 11th Gen w/ NVMe RAID5 question

joshuaboyd

Cadet
Joined
Apr 6, 2021
Messages
1
Hello, I want to build a new server that will serve as a Plex server, among other things. I'd like to use the new Intel i7-11700K CPU that I just got, and I want 12-16 NVMe SSDs in a RAID5/6 storage array. I'm running into a few challenges: I can't seem to find a chassis and RAID card that can pull this off. Any suggestions? I know this is overkill, but I have a Tim Taylor approach to computing :D
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
RAID card that can pull this off.
You won't be using a RAID card if you're using TrueNAS; ZFS is software RAID. You should look for HBA cards instead (from LSI or another vendor that uses an LSI chip... see the hardware recommendations about HBAs in the Resources section).

TrueNAS should support that hardware though.
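
To make the software-RAID point concrete, here is a minimal sketch (in Python, just to assemble the command) of the kind of pool ZFS would build over those drives. The pool name, drive count, and device names are hypothetical, and on TrueNAS you would normally do this through the web UI rather than by hand:

import subprocess

# Hypothetical: 12 NVMe drives in a single RAIDZ2 vdev (roughly the RAID6 analogue).
devices = [f"/dev/nvme{i}n1" for i in range(12)]
cmd = ["zpool", "create", "tank", "raidz2", *devices]

print(" ".join(cmd))                 # review the command first
# subprocess.run(cmd, check=True)    # uncomment to actually create the pool

The HBA just exposes the drives to the OS; ZFS handles the parity and redundancy itself.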
 

dareinelt

Cadet
Joined
Mar 20, 2021
Messages
3
12-16 NVMe devices mean 48-64 PCIe lanes. The i7-11700K provides only 20, so it's not possible. The i7-11700K also has no ECC support.

NVMe SSDs live directly on the PCIe bus; that's the whole point of them, because of the lower latency. There are no RAID cards for them yet.
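
For anyone following along, the lane math above works out like this (a quick Python sketch; it assumes each drive gets its full x4 link, which is the assumption behind the 48-64 figure):

lanes_per_drive = 4     # a full NVMe x4 link per SSD
cpu_lanes = 20          # CPU-attached PCIe lanes on the i7-11700K
for drives in (12, 16):
    needed = drives * lanes_per_drive
    print(f"{drives} drives x {lanes_per_drive} lanes = {needed}; CPU provides {cpu_lanes}")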
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
AMD Epycs can do up to 128 lanes of PCIe. For example, my new NAS board, which uses an AMD Epyc, has 4 x 16-lane PCIe 4.x slots. That is up to 16 NVMe drives without PCIe switches. (Plus it has 2 x M.2 slots and other I/O.) Of course, AMD Epyc boards and CPUs are not cheap...

There are some LSI HBAs that support NVMe, but I have not looked too closely at them. They would likely act as a PCIe switch, so their only real advantage is that they can also support SAS or SATA drives.
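
Same arithmetic from the supply side, using the slot layout described above (assuming each x16 slot can be bifurcated x4/x4/x4/x4, so no PCIe switch is needed):

slots = 4
lanes_per_slot = 16
lanes_per_drive = 4
print(slots * lanes_per_slot // lanes_per_drive)   # 16 NVMe drives, each at a full x4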
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
12-16 NVMe devices mean 48-64 PCIe lanes. The i7-11700K provides only 20, so it's not possible. The i7-11700K also has no ECC support.

NVMe SSDs live directly on the PCIe bus; that's the whole point of them, because of the lower latency. There are no RAID cards for them yet.

All of this simply isn't true.

1) A PCIe PLX switch is the ideal option to expand PCIe lanes, because the truth is that you don't need to be able to drive all NVMe devices at full speed all the time. Your NAS is going to be speed-limited by the network anyways.

2) Bifurcation means that you can drive 12-16 NVMe devices with x1 lanes, obviously limiting you to the x1 lane speed, which is probably still just fine for a NAS.

3) LSI does indeed have RAID cards that will handle NVMe, although I would say suboptimally. Look at the 9400 series. The 9400-16i can connect up to 24 NVMe devices into an x8 slot.
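
A rough back-of-the-envelope check on points 1 and 2 (approximate usable line rates after link encoding; real-world throughput will be a bit lower on every row):

rates_GBps = {
    "PCIe 3.0 x1": 8e9  * 128/130 / 8e9,   # ~0.98 GB/s
    "PCIe 4.0 x1": 16e9 * 128/130 / 8e9,   # ~1.97 GB/s
    "10GbE link":  10e9 / 8e9,             # 1.25 GB/s
    "1GbE link":   1e9  / 8e9,             # 0.125 GB/s
}
for name, rate in rates_GBps.items():
    print(f"{name:12s} ~{rate:.2f} GB/s")

Even one PCIe 3.0 lane per drive is in the same ballpark as a 10GbE port, which is the point about the NAS being network-limited anyway.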
 

dareinelt

Cadet
Joined
Mar 20, 2021
Messages
3
If you use switches or other such kludges, you could simply stay with SAS SSDs. The latency in the end would be equal.

Show me a mainboard that supports bifurcation down to single lanes. The lowest I've seen in practical use is x4 as the smallest split of a larger slot.

As with the two points before: everything sounds great in theory. Are there any cables on the market yet that support more than one NVMe device on the 9400? The last time I contacted Broadcom, their answer was that there is no market for them and they won't put them into production...

Keep in mind: not everyone freshly registered with a low post count is a DAU...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If you use switches or other such kludges, you could simply stay with SAS SSDs. The latency in the end would be equal.

How so? A PLX switch is going to be incredibly fast. They are designed to operate at PCIe speeds. A single PCIe 4.0 lane is 16GT/s, or 2GBytes/sec, whereas SAS is 12Gbps, or about 1GByte/sec. Additionally, for the SAS controller, your data must flow over the PCIe bus to the RAID CPU, where it is deserialized, interpreted/processed on a PPC CPU, assigned to an SAS port, reserialized onto SAS, deserialized on the SSD, and THEN interpreted by the SSD controller. With the PLX switch, your data flows over the PCIe bus to the PLX, is cut-thru switched over to the PCIe lanes going to the NVMe SSD, and arrives at the SSD controller virtually instantly. There is none of the extra deserialization/CPU-massaging/reserialization-to-SAS of a RAID controller.

Help me understand your latency-would-be-equal claim. Where does the latency in the PLX switch model come in?
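
For the bandwidth figures quoted above, the encoding-aware numbers come out roughly like this (protocol overhead on both links will shave a little more off in practice):

pcie4_x1_GBps = 16e9 * 128/130 / 8e9   # 16 GT/s, 128b/130b -> ~1.97 GB/s
sas3_GBps     = 12e9 * 8/10    / 8e9   # 12 Gb/s, 8b/10b    -> 1.20 GB/s
print(f"PCIe 4.0 x1: ~{pcie4_x1_GBps:.2f} GB/s   SAS-3: ~{sas3_GBps:.2f} GB/s")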

Show me a mainboard that supports bifurcation down to single lanes. The lowest I've seen in practical use is x4 as the smallest split of a larger slot.

So you've moved the goalposts. Having originally said "not possible," and being shown to be wrong, you are now saying that no readily available mainboard does this. That might be true, but it certainly IS possible for such a thing to be supported.

As with the two points before: everything sounds great in theory. Are there any cables on the market yet that support more than one NVMe device on the 9400? The last time I contacted Broadcom, their answer was that there is no market for them and they won't put them into production...

I would imagine so. That makes perfect sense, as their target market is doubtlessly high end server vendors who would be making 24 drives available on the front of their chassis from a backplane, and would be able to handle ordering the cabling from a custom fab. Dell, HP, and Supermicro all do things like this.

Keep in mind: not everyone freshly registered with a low post count is a DAU...

Well, yes, I would hope that you are not a Disk Array Unit. A Disk Array Unit capable of logging into a forum and posting stuff would be a terrifying development, "Skynet woke up on Thursday the 9th of April, 2021 ..."
 