Do dual E5-2600 v3/v4 processors provide enough PCIe lanes (80) to support ~12 NVMe SSDs?

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I asked a more detailed version of this question, but it probably gave too much information. So -- here's the same question after a diet:

Dell PowerEdge R730XD
2P -- E5-2600v3 or v4
10x-12x NVMe (x4) drives...

This gives two candidate PCIe slot layouts, either...

Option A -- PCIe 3.0 Slots:
  • (1x) x16
  • (6x) x8
Option B -- PCIe 3.0 Slots:
  • (2x) x16
  • (5x) x8

And even if it's Option A (only one x16 slot):
1x x16 card + 3x x8 cards = 40 lanes ... enough for 10x NVMe drives at x4 each. ¯\_(ツ)_/¯

...and I'd still have other x8 slots free for AIC cards for a mirrored fusion pool (with Optanes) and NVDIMMs for a SLOG, I think ... no..?
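
To sanity-check the lane math, here's a quick back-of-the-envelope sketch in Python (the slot counts are just the two layouts above, and it assumes plain x4 bifurcation with no PLX switches):

```python
# Back-of-the-envelope PCIe lane budget for the two slot layouts above.
# Assumes each NVMe drive gets a straight x4 link via bifurcation
# (no PLX switches), so drives per slot = slot width // 4.

LANES_PER_NVME = 4

layouts = {
    "Option A": [16] + [8] * 6,      # 1x x16 + 6x x8
    "Option B": [16, 16] + [8] * 5,  # 2x x16 + 5x x8
}

for name, slots in layouts.items():
    lanes = sum(slots)
    drives = sum(width // LANES_PER_NVME for width in slots)
    print(f"{name}: {lanes} lanes in slots -> up to {drives} x4 NVMe drives")

# Option A: 64 lanes -> up to 16 drives
# Option B: 72 lanes -> up to 18 drives
# Either way, 10-12 drives (40-48 lanes) fit with slots to spare.
```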
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I seem to remember a rather recent thread which basically said that this generation of Dell server is not suitable for that number of NVMe SSDs.
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
You should check the manuals to validate whether splitting PCIe lanes is really possible (1) at all, (2) only for some slots, or (3) for all slots.

The keyword is bifurcation, which means splitting PCIe lanes without additional hardware like PLX chips.
This usually requires support from the CPU (which the E5-2600 v3/v4 provide to some extent) and from the BIOS/UEFI.
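
Once the drives are installed you can also confirm that each one actually negotiated a x4 link. A minimal sketch, assuming a Linux host (it only reads the stock sysfs attributes exposed by the kernel's PCI core; on FreeBSD-based systems something like pciconf -lc shows similar link info):

```python
#!/usr/bin/env python3
# Print each NVMe controller's negotiated PCIe link width and speed,
# using the standard attributes the Linux PCI core exposes in sysfs.

import glob
import os

def read_attr(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci = os.path.realpath(os.path.join(ctrl, "device"))  # PCI device dir
    cur_w = read_attr(os.path.join(pci, "current_link_width"))
    max_w = read_attr(os.path.join(pci, "max_link_width"))
    cur_s = read_attr(os.path.join(pci, "current_link_speed"))
    print(f"{os.path.basename(ctrl)} ({os.path.basename(pci)}): "
          f"x{cur_w}/x{max_w} @ {cur_s}")
```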

Edit:
Did some digging, and Intel's documentation for the E5-2600 v3 and Dell's docs for the R730xd show that all PCIe slots can be split down to x4 links, which is great. :grin:

Edit2:
The cheapest option for splitting a PCIe x16 slot into x4/x4/x4/x4 with M.2 NVMe up to 22110 is probably the ASUS Hyper M.2 x16 Card V2: https://www.asus.com/motherboards-components/motherboards/accessories/hyper-m-2-x16-card-v2/
The cheapest option for splitting a PCIe x8 slot into x4/x4 with M.2 NVMe up to 22110 is probably the Supermicro AOC-SLG3-2M2: https://www.supermicro.com/en/products/accessories/addon/AOC-SLG3-2M2.php

Edit3:
As my last attachment from the R730(xd) manual indicates, if I am not miscounting, up to 72 PCIe lanes are connected to the PCIe slots -- which matches Option B above (2x 16 + 5x 8 = 72).
 

Attachments

  • 2021-08-26 23_09_02-Intel® Xeon® Processor E5-1600 _ 2400 _ 2600 _ 4600 v3 Product Families Da...png (69.9 KB)
  • 2021-08-26 23_14_31-Microsoft Word - poweredge-r730xd_owners-manual_en-us_180912.docx.png (114.5 KB)
  • 2021-08-26 23_43_45-Dell PowerEdge R720 and R720xd Technical Guide.png (78.2 KB)

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197

WOW, THANK YOU! That was very very generous of you to provide that research.

I'll dig into that ... and that was precisely along the lines of my thinking.

Basically, those who'd said the "R730 isn't compatible" were parroting it without being explicit -- so, for anyone else who wants to know the REAL reason ...

IT'S NOT THE HARDWARE / TECHNOLOGY / BANDWIDTH:

As you pointed out -- it's perfectly doable ... just NOT WITH A BACKPLANE. Because, whether Intel limited Dell's options or Dell did it themselves, no backplane exists for this model that provides access to the number of NVMe devices the motherboard + CPUs could support -- that only exists on other models (R740 or R7415 / R7425) ...

So unless they screwed up and allowed interchangeable backplanes BETWEEN (even recent) generations ...

I'd be limited to either M.2 or AIC ... though there may be a modicum of hope
(though not hot-swappable) for the rear drive slots. :)


If I'm wrong -- or am still missing the point -- I'm always grateful to update my understanding.
LMK if you see any reasoning that's fallacious, etc.

Thanks!!
 