New build advice: Microserver Gen 10+

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Yes, I think even with a PCIe switch card you need bifurcation if the card has 8 host lanes.
Bifurcation isn't required for cards like this - the PLX chip on the board will claim all of the host PCIe lanes for itself, and play the role of a switch between the host and the downstream devices.
 

shanemikel

Dabbler
Joined
Feb 8, 2022
Messages
49
I stand corrected.

But the Gen10 doesn't support "dual bifurcation," so I believe a PLX chip is required for a card with four NVMe drives on this system. Is that right?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
I'm not clear what "dual bifurcation" refers to. I've usually seen bifurcation written as a concatenation of the lane breakdowns: an x16 slot breaking down into two or four links would be listed as "x8x8" or "x4x4x4x4", and an x8 slot would be "x4x4". Slots usually don't bifurcate further than that, with the exception of laptop vendors splitting their M.2 slots as x2x2 to support the Optane H-series cards.

But yes, in a system that only supports x8x8, you'd need to leave the host port at x16 and use a PLX chip to get four NVMe devices at x4 each in the slot. Putting in a non-PLX card might result in two of the four drives working, or a very confused host system, as it would see two devices physically present on each x8 link and would probably refuse to even complete POST.
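
If you want to sanity-check how a given card actually enumerates once it's installed, one quick way is to look at the PCIe topology from the OS. Below is a minimal sketch in Python (my own illustration, not anything from HPE or iXsystems) that assumes TrueNAS SCALE / Linux with `lspci` available; on CORE/FreeBSD the equivalent would be parsing `pciconf -lv` instead:

```python
# Rough sketch: list NVMe controllers and any PCIe bridge that looks like a
# PLX/PEX switch, using plain `lspci` output. Assumes Linux with pciutils.
import subprocess

def pci_devices():
    """Yield (address, description) pairs from `lspci`."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        addr, _, desc = line.partition(" ")
        yield addr, desc

devices = list(pci_devices())
nvme = [d for d in devices if "Non-Volatile memory controller" in d[1]]
switches = [d for d in devices
            if "PCI bridge" in d[1] and any(v in d[1] for v in ("PLX", "PEX", "Broadcom"))]

print("NVMe controllers:")
for addr, desc in nvme:
    print(f"  {addr}  {desc}")

if switches:
    print("Bridges that look like a PCIe switch (drives likely sit behind it):")
    for addr, desc in switches:
        print(f"  {addr}  {desc}")
else:
    print("No obvious switch found - drives are probably on bifurcated root-port lanes.")
```

`lspci -tv` shows the same information as a tree if you'd rather just eyeball it.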

There are also some slots that break down heterogeneously, such as x8x4x4, which lets you insert such wild devices as this one that packs two M.2 2280 NVMe SSDs and a low-profile x8 slot (for an HBA or NIC) into a single PCIe slot, while lining up the top of the LP bracket with the regular-height screw hole.

[Attached photo: the low-profile x8 + dual M.2 card]


(I have zero affiliation and have never purchased this card - other users here have, and I really admire the engineering.)
 

shanemikel

Dabbler
Joined
Feb 8, 2022
Messages
49
Wow. That is wild. Yes, I think the "bifurcation" setting in the HPE BIOS means x8x8 and "dual bifurcation" means x4x4x4x4.

For reference, this is the QNAP card: https://www.qnap.com/en/product/qm2-4p-384/specs/hardware


The other QNAP card with 10GbE and 2x M.2 NVMe (with a PLX switch) has this problem, along with all of the other half-height vendor options for 4x M.2 NVMe on PLX behind x8 host lanes.

If you have full-height slots and want 4x M.2 NVMe with PLX (and full x16 host bandwidth), there is this one: https://www.sonnettech.com/product/m2-4x4-pcie-card/overview.html
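
To put rough numbers on the "full x16 host bandwidth" point: a PCIe 3.0 lane moves roughly 0.985 GB/s after encoding overhead, and four x4 drives behind a switch share whatever the uplink provides. A back-of-the-envelope sketch (the per-lane figure is an approximation, not a measurement):

```python
# Back-of-the-envelope: uplink bandwidth shared by four x4 NVMe drives behind
# a PCIe switch, comparing an x8 vs an x16 PCIe 3.0 host connection.
GBPS_PER_GEN3_LANE = 0.985  # approx. usable GB/s per PCIe 3.0 lane

def uplink_budget(host_lanes: int, drives: int = 4, drive_lanes: int = 4) -> None:
    uplink = host_lanes * GBPS_PER_GEN3_LANE
    per_drive_link = drive_lanes * GBPS_PER_GEN3_LANE
    per_drive_share = uplink / drives  # if all drives are busy at once
    print(f"x{host_lanes} uplink: ~{uplink:.1f} GB/s total, "
          f"~{per_drive_share:.1f} GB/s per drive with all {drives} active "
          f"(each drive's own x{drive_lanes} link is ~{per_drive_link:.1f} GB/s)")

uplink_budget(8)   # half-height cards behind x8 host lanes
uplink_budget(16)  # e.g. the Sonnet card in a full x16 slot
```

In other words the x8 cards still work; they just cap out around half the aggregate throughput of the x16 option when all four drives are hammered at once.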
 

blacksteel75

Dabbler
Joined
Feb 26, 2019
Messages
28
Hi luckyluke699,
Nice decision to use TrueNAS Core for a home DC.
Seems like a good config. Godspeed and good luck!
But since you asked for advice, here's my 50c :smile:

SPEC/UPGRADES:

- Intel i3-9100F CPU.
  • Good! I'd still go for an E-2278GEL or any (G/GE) variant, as they have 8 cores - preferable if you plan to use deduplication, disk encryption or virtualization. The i7-9700F is also a good option.
- 64GB (2x32GB) Crucial 2666MHz RAM (model: CT32G4RFD4266).
  • 64GB is useful if you plan to run more than 8-10 general VMs or jails (in that case, also consider going for 8 cores).
    If you don't plan on more than 5 VMs/jails, 32GB will suffice (rough budgeting sketch after this list).
- HP ILO Enablement card
  • This is a must (unless you are a bit masochistic) :smile: (don't ask me why)
- 4x older 3TB 3.5 HDDs (current). Will upgrade to 4x Western Digital 10-12TBs when funds allow. Probably shucked from WD Elements or WD Black D10.
  • Even with the 4x3TB drives you are good to go. But if they are more than 4-5 years old, then the planned upgrade to 12TB drives is a good idea - they should be flawless for the next 7-10 years. Avoid WD.
    (Some of the older 1.5-3TB non-enterprise HDDs had a higher failure rate near their EOL)
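
For what it's worth, here is the rough RAM budgeting I use (the numbers below are just the old community rules of thumb - ~8GB baseline plus ~1GB of ARC per TB of raw pool - not official iXsystems sizing, and the ARC rule is conservative for a home media pool):

```python
# Very rough RAM budget using commonly quoted rules of thumb:
#   ~8 GB baseline for TrueNAS itself
#   ~1 GB of ARC per TB of raw pool capacity (old ZFS rule of thumb)
#   plus whatever you assign to each VM/jail.
# None of these figures are official sizing guidance.
def ram_budget(pool_tb: float, vm_gb: list[float], base_gb: float = 8.0) -> float:
    arc_gb = pool_tb * 1.0  # rule-of-thumb ARC allowance
    total = base_gb + arc_gb + sum(vm_gb)
    print(f"base {base_gb:.0f}GB + ARC ~{arc_gb:.0f}GB + VMs/jails {sum(vm_gb):.0f}GB => ~{total:.0f}GB")
    return total

# The current 4x3TB pool with a few light jails/VMs lands right around 32GB:
ram_budget(pool_tb=12, vm_gb=[2, 2, 4, 4])
```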
ADDITIONAL POINTS:
- Stability is my highest priority, at an affordable price point.
  • I have run my HP Microserver Gen8 and HP DL360e Gen8 for more than 4 years with no issues at the hardware or software level, in a mildly harsh environment. Stability shouldn't be your concern.
QUESTIONS (AND ASSUMPTIONS/OPTIONS)?
  • Given my NAS usage, spec and limited ports (listed above), I am guessing I would not 'require' (or would see minimal benefit from) a dedicated cache SSD drive?
    • An M.2 SSD or PCIe NVMe drive will still be useful for fast boot and for running any jails or VMs.
  • On my previous N54L, I used a separate drive for jails/plugins. I'm presuming fitting 2x drives (boot & jail/plugins) in addition to the storage array is the best way to achieve the use-case goals stated above?
    • No point in doing this unless you will run a highly I/O-intensive jail/plugin/VM. A single SSD/NVMe will suffice.
  • Official guidance suggests avoiding partitioning a single NVMe/SSD drive and using it for both boot & jails, but I'm unsure why (in layman's terms)? Should I avoid it like the plague?
    • Most modern SSDs/NVMe drives use wear levelling to keep the utilisation of flash cells uniform. This includes an algorithm which checks how often a certain cell/zone is used and periodically moves data around to less frequently addressed areas.
      Splitting the SSD/NVMe into partitions interferes with this algorithm, as you are effectively putting limits on where it can operate.
      Better to avoid it, but it is not critical, as SSD MTBF figures are already close to the range of regular HDDs.
  • Long shot: if anyone can confirm a known-working (on the Gen 10+), affordable (sub-£100) PCIe Gen 3.0 card which supports 2x NVMe/SSD drives and can utilise the motherboard's x8x8 bifurcation (not x4x4x4x4, as appears common), please let me know. This would be the ideal solution.
  • Other options include:
  • Internal USB 2.0 port with a 2242 M.2 drive in a USB enclosure (not a USB stick), e.g. this one from Zomy or ElecGear. I presume USB 2.0 would be far too slow for jails/plugins? But would TrueNAS run fast enough on it? Or would I notice performance issues?
    • Booting and installing from USB 2.0 will be slow (I've tried it). After install TrueNAS will work fine, but if you use a 10Gb network (as I do) it may hurt performance a bit - see the throughput sketch after this list.
  • External USB 3.0 port with a portable 2TB 2.5" hard drive, which SABnzbd could run on if needed. Though I'd prefer an entirely 'internal' solution if possible.
    • I'd suggest avoiding external drives.
  • A simple riser card such as these from StarTech or Sabrent, with a single NVMe drive attached. Best used for jails or boot?
    • Both (as I stated previously).
  • Any other advice or suggestions around how best to set this up?
    • Use 2x 10Gb fiber (if your network supports it) and put in an M.2 SSD (I think the MB has slots onboard).
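
On the USB 2.0 question above: the bus tops out around 480 Mbit/s, which is roughly 30-35 MB/s in practice. A quick ballpark of what that means for boot-pool-sized transfers versus a SATA SSD or a 10Gb link (figures are rough real-world estimates, not benchmarks):

```python
# Ballpark transfer times for a boot-pool-sized chunk of data over USB 2.0,
# a SATA SSD, and 10GbE line rate. Throughputs are rough estimates.
LINKS_MB_S = {
    "USB 2.0 (~480 Mbit/s bus)": 35,
    "SATA III SSD": 450,
    "10 GbE (line rate)": 1100,
}

def transfer_time(size_gb: float) -> None:
    size_mb = size_gb * 1024
    for name, mb_s in LINKS_MB_S.items():
        print(f"{size_gb:.0f} GB over {name:<26} ~{size_mb / mb_s / 60:5.1f} min")

transfer_time(16)  # roughly a TrueNAS boot pool's worth of data
```

That's why the install feels painful but day-to-day use barely notices: the boot pool sits mostly idle after startup, whereas jails doing heavy I/O over a 10Gb link would definitely feel the USB 2.0 ceiling.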
For the folks that have used a USB-to-SATA adapter for the boot drive on these units, where are you mounting the SATA drive? I have a 2.5" SATA drive and I can't find space for it inside the enclosure.
 

luckyluke699

Dabbler
Joined
Mar 30, 2016
Messages
18
I used a USB to mSATA adapter, so it just pokes up directly from the USB slot itself. I have to remember to remove it before I slide the chassis out (else it catches) but other than that it works absolutely fine...

Your other option is external, but internal is probably safer (assuming you're happy with it being USB 2.0 internally, which seems fine to me for a boot drive).
 

blacksteel75

Dabbler
Joined
Feb 26, 2019
Messages
28
Cool. From this thread and other posts about the HP Gen10 Microserver v2, I got the impression that someone had been able to fit a 2.5" SATA SSD into the enclosure, with a USB-to-SATA cable connecting it to the internal USB port. Looking at the space inside the box now, I think that's impossible...
 