Anyone running TrueNAS with this? ASUS Hyper M.2 DO or DO NOT?

swezey

Dabbler
Joined
Feb 17, 2022
Messages
21
So I am still kind of in the planning stages of my first TrueNAS build. I've actually installed it and gotten it running in a test environment, but I'm having second thoughts about spinners and am thinking maybe I should go SSD. I have a bunch of questions around that BUT... really what I am wondering first is if anyone is running the latest version of TrueNAS with the ASUS Hyper M.2 expansion card seen here:

https://www.asus.com/us/Motherboards-Components/Motherboards/Accessories/HYPER-M-2-X16-CARD-V2/

Or something similar?? I have never used these, but the price point is such that you could make a monster fast storage array IF it works in TrueNAS (and, I guess, on your mobo). Since I'm not 100% committed to the hardware yet, I could make some choices if this is a thing that is doable and reliable. Honestly, I hadn't even considered any kind of SSD storage until a few days ago; I am kinda old school, I guess, but if this is viable I'll investigate further. Any experiences, good, bad, etc. are really welcomed and appreciated. Thanks everyone!

Bill
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It appears that the slot would need to support bifurcation, which means breaking up an x8 slot into 2 x4 slots, or an x16 slot into 4 x4 slots. Without wide enough PCIe slots AND bifurcation support, this card is not all that useful.

For example, my AMD Epyc board has 4 x16 slots. But I don't know if it supports bifurcation, and if so, on which slot(s).

There are other PCIe to 2x or 4x NVMe M.2 drive adapters that use PCIe switches. These are more expensive but work better in some cases. For example, the card you list can't run 4 M.2 drives in an x8 slot, but an adapter with the right PCIe switch can work in x4, x8, or x16 slots with as many drives as the card holds.
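
For what it's worth, once bifurcation is enabled (or with a switch card), you can sanity-check from the TrueNAS shell that the host actually sees every stick. This is just a rough sketch, assuming the Linux-based SCALE with nvme-cli present; the CORE (FreeBSD) equivalent is in the comment:

  nvme list                 # should show one drive per installed stick
  # On CORE (FreeBSD) the rough equivalent is:
  #   nvmecontrol devlist

If fewer drives show up than you installed, the slot most likely isn't bifurcating.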
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Or something similar??

Bearing in mind that the only value the ASUS card would seem to offer is active cooling, it's about double the price of the generic Shenzhen special. That probably isn't terrible in many cases, but it's unnecessary in a typical server chassis.

A card such as this or the Shenzhen special does require bifurcation support, as @Arwen mentions. I feel like the Shenzhen special would be a better choice in a server chassis due to the smaller footprint -- server expansion space tends to be more constrained than in gamer/enthusiast monster PCs.

The typical PLX switch chip cards are more expensive, $100-$200, such as the Supermicro one. These generally only need an x8 slot, and most of them will also work in electrically-x4 slots.
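
If you want to see what a given card (or the drives behind it) actually negotiated, you can ask the OS. A rough sketch, assuming the Linux shell on SCALE; the PCI address below is just a placeholder for whatever plain "lspci" shows for your device, and you may need root to see the link capabilities:

  # Placeholder address 0000:03:00.0 -- find yours with plain "lspci" first.
  lspci -s 0000:03:00.0 -vv | grep -E "LnkCap|LnkSta"
  # LnkCap is what the device supports (e.g. x8); LnkSta is what it actually
  # trained at (e.g. x4 when it's sitting in an electrically-x4 slot).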

For a ZFS pool, though, you also need to pay attention to the quality of the SSD. Buying bargain-basement QLC SSDs is going to be disappointing.
 

swezey

Dabbler
Joined
Feb 17, 2022
Messages
21
Yes, I had come across the "bifurcation issue" and frankly would prefer not to deal with that if possible. It seems like the kind of minutia that could just end up causing trouble. Thanks for the tip that the Supermicro would work, as would something like this, I assume:


Although those are considerably more expensive than the ASUS or Shenzhen special. ;-)

So assuming I do go down this road, if I stuff 4 8TB sticks in one of these, what does TrueNAS see? One gigunda 32TB drive, or 4 separate 8TB drives? And are there particular settings that should be used when setting up a pool with SSDs versus spinners? Is RAID necessary? Does it even work? I have never done this before with SSDs. And to your point @jgreco, I will definitely NOT use consumer-level SSDs!

And just for grins, could I stick 2 or 3 of these in my server (assuming I have the room) and make like a 100TB storage server? Not that we have that kind of money to spend, but could I? Thanks, guys!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I used an Asus M.2 Hyper thingie. I took it out and am now using what @jgreco describes as a Shenzhen special.
Why?
The Shenzhen special is shorter, which means it doesn't intrude into my SATA ports (the longer Asus card made some cabling interesting). I then cable-tied a 120/140mm (can't remember which) fan above the PCIe slots for extra airflow across the PCIe cards.

Bifurcation is easy. The major difficulty that I found is trying to figure out which BIOS slot matches which PCIe slot (I ended up asking SMC); then you just turn on bifurcation on that slot, plug in the Shenzhen card, and off you go. ZFS sees each gum stick as a separate drive.
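
To @swezey's question above: the sticks show up as four separate drives, and you choose the redundancy when you build the pool, exactly as with spinners. You would normally do this from the TrueNAS web UI (which partitions the disks and uses its own device naming), so treat the following purely as a rough sketch of the layout; the pool and device names are placeholders:

  # 4 x 8TB as a single raidz1 vdev: ~24TB usable, survives one drive failure.
  zpool create fastpool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  # Or as two mirrors: ~16TB usable, better random I/O, easier to grow later.
  # zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1

(And for the 100TB question: three such cards fully populated is 3 x 4 x 8TB = 96TB raw, or roughly 72TB usable if each card's four drives go into a raidz1 vdev, before ZFS overhead.)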

So yes, depending on the number of lanes and PCIe slots you have, you could fill the server with Shenzhen cards (or the expensive SMC cards) and then fill them with 8TB SSDs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
BIOS slot matches which PCIe slot (I ended up asking SMC)

Ahahaha, thanks for that; I always wondered if I was just unusually dense. It's always possible to figure this out, but I always seem to spend like ten minutes staring at it all, trying to decipher what's on the board, what's in the manual, and what's in the BIOS (which seems to be catastrophically poorly designed). And I've done it lots of times. :smile:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
server expansion space tends to be more constrained than in gamer/enthusiast monster PCs.
I, for one, enjoy destroying my back every time I pick up my full ATX chassis full of junk. Back in the day, multiple graphics cards were cool as hell. Now it's SSDs everywhere, plus room for a 10 GbE NIC when I finally decide to get one (or five).

Back on topic for future readers: there are specific caveats to this sort of solution - Asus, Shenzhen special, Supermicro, the same applies to all:
  • I'm having a hard time finding info on AMD CPUs, so you'll have to do some research. Same goes for Intel platforms not listed here.
  • LGA 115x CPUs do not bifurcate down to x4/x4/x4/x4, only x8/x4/x4. These cards would still work with three SSDs (specific slots will depend on the motherboard and adapter card).
  • Higher-end CPUs, even as old as Xeon E5 v1/v2, do support x4/x4/x4/x4, but this was not often an option exposed to users (there were no NVMe SSDs around when Xeon E5 v1 was mainstream). The same goes for early in the Xeon E5 v3/v4 lifecycle, but some vendors exposed the option in later updates once it became clear that PCIe x4 SSDs were the future. For those that didn't... hacking the UEFI configuration space to flip the switch manually to x4/x4/x4/x4 is viable, but will not survive a firmware settings reset. Hacking a firmware image to set the default to the desired value is also possible, but carries additional risk. (A way to check from the OS whether the split actually took effect is sketched after this list.)
    • Specifically, Xeon E5 v1/v2/v3/v4 all support:
      • PCIe root 0: x16, x8/x8, x8/x4/x4, x4/x4/x8, x4/x4/x4/x4
      • PCIe root 1: x16, x8/x8, x8/x4/x4, x4/x4/x8, x4/x4/x4/x4
      • PCIe root 2: x8, x4/x4
    • Xeon Scalable v1/v2 seem to keep the three-root configuration, but with 3x 16 lanes. I suspect root 2 also supports all combinations down to x4/x4/x4/x4.
    • Xeon Scalable v3 has 64 lanes; it could be as simple as a new root 3 that is as capable as the other three. Can't find any details, though.
  • More recent systems should offer this configuration option, dependent on the platform's hardware support.
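
Whichever generation you're on, a rough way to confirm from a running system that the split actually took effect (a sketch, assuming the Linux shell on TrueNAS SCALE):

  lspci -tv                          # topology tree: each stick should hang off its own root port
  lspci | grep -ci "non-volatile"    # count of NVMe controllers; should equal the number of sticks

If the count comes up short, the slot is most likely still running as a single device and the extra drives simply aren't visible.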
 