Motherboard Choices for 10th Gen Intel for TrueNAS

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21

ASRock Rack > Z490D4U-2L2T

When reading over the specs, I saw this:
"*The M.2 slot (M2_2) is shared with the PCIE5 slot (BTO). When M2_2 is populated with a M.2 PCIe module, PCIE5 will be disabled"
I looked over the diagram for the motherboard to see where PCIE5 is, and I must be missing it. Can anyone tell me where it is?

Sorry if that looks like a stupid question... I haven't used server motherboards before. :oops:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The thing to be concerned about here is that the RAID BIOS configuration for more recent cards has been integrated into the host BIOS,
Are you sure this isn't just a UEFI thing? Legacy BIOS didn't have an option to "extend" system setup menus from PCI expansion ROMs, but UEFI does. E.g. LSI SAS2/3 cards still take the legacy extension ROM and the UEFI extension ROM. The legacy extension ROM is, I guess, an independent real mode application like the firmware setup utility itself is. But the UEFI extension ROM adds its own menus to those present in the system setup utility (no doubt describing the menu structure and callback addresses in some byzantine UEFI way), and they end up looking a lot like standard menus from the system ROM.
If the configuration menus you're referring to are similar to this (shown here for a Dell HBA330 Mini, but the structure is identical to what I have at home with a Supermicro motherboard and an HBA with a SAS2008 - with AMI's look and feel instead of Dell's), then I suspect you've simply encountered UEFI option ROMs:

Entrypoint from the setup application into the device settings (notice the tip at the bottom: the HBA and NICs have expansion ROMs, and the SSDs are probably being handled by a driver included in the system firmware):
[screenshot: device settings entry point in the setup application]


Top-level menu for the LSI config utility (you can tell that the firmware has to go load different resources because the screen flashes for a second, and you get LSI's typical list of all LSI controllers of the supported generation):

[screenshot: top-level menu of the LSI config utility]

If we go deeper, we get the usual LSI menus:
[screenshots: the usual LSI configuration menus]



Equivalent on a Supermicro board with standard LSI SAS2 firmware on the card:

[screenshots: equivalent LSI menus on the Supermicro board]
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
"PCIE5 slot (BTO)" Built-To-Order! You have to specifically order a board with this slot mounted to experience the limitation. As pictured, there's only an empty green space behind the M.2 socket where SLOT5 would be mounted.

Otherwise, a server board like this is just a regular Z490, but optimised for 24/7 operation, no overclocking and no care for "aesthetics" because it's bound to be locked away in a server room or a far-away data centre rather than adorn a fancy case with tempered glass panels.

If you don't mind the screw-in, finicky, drive trays (I certainly would!), the Define 7 is just fine for any "server" board that fits in.
The issue is more about reorganising drives—and, if possible, considering a safer raidz2 layout because raidz1 is actually not guaranteed to sustain the loss of one drive when using such large capacities. How much data is in there?

Read cache (L2ARC) would only be useful if you were repeatedly streaming the same files over and over. I doubt that's a realistic use case. And then, L2ARC takes up some ARC space (i.e. RAM); with only 16 GB, a large L2ARC actually hurts performance.
Best take out the cache and keep two NVMe drives for boot and jail, either separate, as wanted, or hacked into dual use.
That will free up a useful SATA port for the data pool.
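
For a sense of scale, here is a rough back-of-the-envelope sketch of how much ARC a big L2ARC device eats just to index itself. The ~70 bytes of header per cached record and the 128 KiB average record size are assumptions for illustration, not exact OpenZFS figures:

```python
# Rough estimate of the ARC (RAM) consumed by L2ARC headers alone.
# Both the per-record header cost and the average record size below
# are assumptions for illustration, not exact OpenZFS numbers.

L2ARC_SIZE_BYTES = 500 * 10**9      # e.g. a 500 GB NVMe used as cache
AVG_RECORD_SIZE = 128 * 1024        # assumed average cached block size
HEADER_BYTES_PER_RECORD = 70        # assumed ARC header cost per L2ARC block

records = L2ARC_SIZE_BYTES // AVG_RECORD_SIZE
arc_overhead_mib = records * HEADER_BYTES_PER_RECORD / 2**20

print(f"~{records:,} cached records -> ~{arc_overhead_mib:.0f} MiB of ARC spent on headers")
# With smaller records (e.g. 16 KiB), the same device costs ~8x more ARC,
# which matters a lot when the whole system only has 16 GB of RAM.
```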
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
When reading over the specs, I saw this:
"*The M.2 slot (M2_2) is shared with the PCIE5 slot (BTO). When M2_2 is populated with a M.2 PCIe module, PCIE5 will be disabled"
I looked over the diagram for the motherboard to see where PCIE5 is, and I must be missing it. Can anyone tell me where it is?

Sorry if that looks like a stupid question... I haven't used server motherboards before.
Two things:
  1. Not at all a stupid question, more like an ASRock documentation bug
  2. It's not a server motherboard if it uses a Z*** PCH. That said, the model in question gets you as close as you can get without having ECC. It would fulfill your stated objective of keeping the RAM and i7.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
this isn't just a UEFI thing?

So far the only places where I've been encountering this are on vendor systems, which seem to be tightly locked down. The H740p seems to need Dell-specific stuff, which isn't that shocking since, well, y'know, the whole Lifecycle Manager and other integrations. This seems to be the experience elsewhere as well. The blurring of the lines makes it difficult to tell, I agree. Although getting stuff integrated as extensions into the main BIOS menu is a blessing.

Random tangent... Am I the only one disappointed that the 3108 is still the go-to RAID card after all these years? It's been in use since X9 days, the X9DRW-CTF31 has it as an AOM, and every generation including X12 now has offered it too.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Am I the only one disappointed that the 3108 is still the go-to RAID card after all these years?
I think it's a matter of cost/benefit. The more recent stuff adds little that's useful, as far as I can tell. The SAS35xx adds PCIe/NVMe, but you need a U.3 backplane and a crazy expensive tri-mode expander for it to be useful in a real application... And then you're oversubscribed immensely on the PCIe side.
The SAS39xx adds PCIe 4.0, which would be useful in the crazy-expensive setup with NVMe, but a lot less so in the standard SAS scenario.

Maybe if the expanders didn't cost an arm and a leg, and if AMD hadn't gone for 128 PCIe lanes for Epyc, we'd see more of them.
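
To put rough numbers on the oversubscription point, here is a quick sketch. The per-lane throughput figures are approximate, and the 8-drive tri-mode setup is a hypothetical example, not a specific product configuration:

```python
# Illustration of PCIe oversubscription behind a tri-mode HBA.
# Per-lane throughput is approximate (128b/130b encoding, ignoring
# protocol overhead); drive count and per-drive speed are assumptions.

GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}  # ~GB/s per lane
HBA_LANES = 8
NVME_DRIVES = 8
NVME_READ_GBPS = 3.5                                     # typical Gen3 x4 SSD

for gen, per_lane in GBPS_PER_LANE.items():
    uplink = per_lane * HBA_LANES
    demand = NVME_DRIVES * NVME_READ_GBPS
    print(f"{gen} x8: uplink ~{uplink:.1f} GB/s vs drive demand ~{demand:.0f} GB/s "
          f"-> {demand / uplink:.1f}:1 oversubscribed")
```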
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
If you don't mind the screw-in, finicky, drive trays (I certainly would!), the Define 7 is just fine for any "server" board that fits in.
I have the Define R6 and those trays are really not great, in my opinion. My primary concern is not so much that replacing a drive with them is a bit fiddly, though; it is still better than the cases I have had before.

The problem with the trays is thermals, because they shield the drives so much from the airflow. I therefore added some pressure-optimized fans from Noctua (NF-A14 PPC 3000 PWM), which manage to get my drives to around 32 °C, but they are extremely loud.
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
"PCIE5 slot (BTO)" Built-To-Order! You have to specifically order a board with this slot mounted to experience the limitation. As pictured, there's only an empty green space behind the M.2 socket where SLOT5 would be mounted.

Otherwise, a server board like this is just a regular Z490, but optimised for 24/7 operation, no overclocking and no care for "aesthetics" because it's bound to be locked away in a server room or a far-away data centre rather than adorn a fancy case with tempered glass panels.

If you don't mind the screw-in, finicky, drive trays (I certainly would!), the Define 7 is just fine for any "server" board that fits in.
The issue is more about reorganising drives—and, if possible, considering a safer raidz2 layout because raidz1 is actually not guaranteed to sustain the loss of one drive when using such large capacities. How much data is in there?

Read cache (L2ARC) would only be useful if you were repeatedly streaming the same files over and over. I doubt that's a realistic use case. And then, L2ARC takes up some ARC space (i.e. RAM); with only 16 GB, a large L2ARC actually hurts performance.
Best take out the cache and keep two NVMe drives for boot and jail, either separate, as wanted, or hacked into dual use.
That will free up a useful SATA port for the data pool.
[screenshot: TrueNAS dashboard]

Here is a screenshot of the dashboard. The ASRock Rack Z490D4U-2L2T looks to be the board to go with. (I just may need to start a GoFundMe page to come up with the money for the parts I need, lol. ;) )

So I'm thinking this should be the build I should work towards.

Fractal Design Define 7

ASRock Rack > Z490D4U-2L2T
Intel - Core i7-10700K
CORSAIR - VENGEANCE LPX 32GB (2 x 16GB) 3.2 GHz DDR4 C16 x2
Intel EXPI9404PT Ethernet PRO/1000 PCI-E PT Quad Port Server Adapter
LSI LSI00244 9201-16i PCI-Express 2.0 x8 SATA / SAS Host Bus Adapter Card
2x NVME WD 500 GB (1 for Boot and the other for the jail)
1x WD 18 TB (SHUCKED)
4x WD 14 TB (SHUCKED) [RAIDZ1]
3x WD 10 TB (SHUCKED) [RAIDZ1]
1x WD 3TB NAS
(Will work on getting more 14TB or 18TB hard drives to go to RaidZ2 though; see the capacity sketch below)
Noctua NF-A14 PWM chromax.black.swap, Premium Quiet Fan, 4-Pin (140mm, Black) x6 (3 in the front, 2 in the bottom (if I can fit them),1 in the rear)
**And for the CPU**
I'm stuck between going with the Noctua NH-D15 chromax.black dual-tower CPU cooler (140mm, black) plus two more Noctua NF-A14 PWM chromax.black.swap fans, or going AIO. I believe I can fit a 280 mm AIO with the server configuration, but I may need to stick with a 240 mm. Kind of tough to tell when you're not actually in there building the system over a few days.
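
Since the plan above mentions eventually moving to RaidZ2, here is a rough capacity sketch for the listed drives. It counts parity cost only and ignores ZFS metadata/slop overhead and the TB-vs-TiB difference:

```python
# Rough usable-capacity comparison for the drive plan above.
# Parity cost only; ignores ZFS metadata/slop overhead and TB-vs-TiB.

def raidz_usable(drives_tb, parity):
    """Usable TB of a raidz vdev: (drive count - parity) * smallest drive."""
    return (len(drives_tb) - parity) * min(drives_tb)

vdev_14 = [14, 14, 14, 14]
vdev_10 = [10, 10, 10]

print("Planned raidz1 vdevs:",
      raidz_usable(vdev_14, 1) + raidz_usable(vdev_10, 1), "TB usable")  # 42 + 20
print("Same drives as raidz2:",
      raidz_usable(vdev_14, 2) + raidz_usable(vdev_10, 2), "TB usable")  # 28 + 10
print("6x 14 TB raidz2 (two more drives):",
      raidz_usable([14] * 6, 2), "TB usable")                            # 56
```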

Let me know what you guys think. Again thanks for all the support... :)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
ASRock Rack > Z490D4U-2L2T
As you've seen from a reaction above, most here would go for a W480 or a Supermicro board to have ECC support, at least as a later upgrade. But this one should be fine, except for the lack of ECC.
CORSAIR - VENGEANCE LPX 32GB (2 x 16GB) 3.2 GHz DDR4 C16 x2
Don't you already own 2*32GB?
Intel EXPI9404PT Ethernet PRO/1000 PCI-E PT Quad Port Server Adapter
Unless you're building a router, this one should be useless with four ports on-board, two of which are 10 GbE!
LSI LSI00244 9201-16i PCI-Express 2.0 x8 SATA / SAS Host Bus Adapter Card
Does the job, but you may consider the PCIe 3.0 successor, the LSI 9300. Depending on how many drives you plan to have, it may not be necessary at all (the ASRock Rack W480 board has 8 SATA ports, for instance), but the HBA may come in handy when moving data to a new raidz2 pool; with every drive/vdev nearly full, it will be tricky.
Noctua NF-A14 PWM chromax.black.swap, Premium Quiet Fan, 4-Pin (140mm, Black) x6 (3 in the front, 2 in the bottom (if I can fit them),1 in the rear)
**And for the CPU**
I'm stuck between going with the Noctua NH-D15 chromax.black dual-tower CPU cooler (140mm, black) plus two more Noctua NF-A14 PWM chromax.black.swap fans, or going AIO. I believe I can fit a 280 mm AIO with the server configuration, but I may need to stick with a 240 mm. Kind of tough to tell when you're not actually in there building the system over a few days.
Keep it simple for the CPU cooler. But some considerations beyond pure function may be at play here… I would expect a NAS with so many drives to be tucked away from ears and eyes, so that glass panels and fan colours should NOT matter.
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
As you've seen from a reaction above, most here would go for a W480 or a Supermicro board to have ECC support, at least as a later upgrade. But this one should be fine, except for the lack of ECC.
The reason I chose this one is partly for the 2 x RJ45 10GBase-T ports. Granted, the other board has ECC support and is a couple hundred dollars cheaper, but everywhere I look it comes up "Out of stock". Hopefully by the time I can order the motherboard, it will be back in stock. When it comes to Supermicro, I keep hearing that their boards are power hogs, and I would like to keep it as efficient as possible :)
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
Does the job, but you may consider the PCIe 3.0 successor, the LSI 9300. Depending on how many drives you plan to have, it may not be necessary at all (the ASRock Rack W480 board has 8 SATA ports, for instance), but the HBA may come in handy when moving data to a new raidz2 pool; with every drive/vdev nearly full, it will be tricky.
Have you seen one that will support up to 16 drives? All the LSI 9300s I have seen do 8. I want to keep the SATA ports free so that I can use them for drive transfers when I upgrade the drives.
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
Unless you're building a router, this one should be useless with four ports on-board, two of which are 10 GbE!
I use it for the increased bandwidth for Plex. I see an improvement when multiple 4K videos are being streamed in/out of the home network. The goal is to get to where I use the 10 GbE, but I will have to wait a while due to money limitations.
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
Keep it simple for the CPU cooler. But some considerations beyond pure function may be at play here… I would expect a NAS with so many drives to be tucked away from ears and eyes, so that glass panels and fan colours should NOT matter.
That's the plan. I chose those fans so that I can keep everything in black. I definitely want to get positive airflow going... As soon as I build my new desk, it will be on the far corner of the desk instead of below. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
this one is partly for the 2 x RJ45 10GBase-T ports.

This is not highly recommended. You are much better off going with SFP+. The technology is better, more reliable, lower latency, less power-hungry, less subject to wiring issues, etc. I just finished writing a message to someone who probably spent a hundred bucks on a crappy 10G copper card and isn't having the best of luck with it. We have a nice 10 Gig Networking Primer for those unfamiliar with server-oriented high performance 10G networking...
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
As soon as I build my new desk, it will be on the far corner of the desk instead of below. :)
Are you saying that you want to place the NAS on a desk at which you will be sitting and working? If so, you should consider vibrations. Personally, I would be afraid of putting unnecessary stress on the disks. And what happens when (not if) you hit the desk hard by accident?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Have you seen one that will support up to 16 drives? All the LSI 9300s I have seen do 8. I want to keep the SATA ports free so that I can use them for drive transfers when I upgrade the drives.
There are 9300-16i cards, though prices may not be palatable. But 16 (HBA) + 6 (onboard) is more drives than the case can accommodate and would bring additional issues with power supply. 8+6 should be enough.
When it comes to Supermicro, I keep hearing that their boards are power hogs, and I would like to keep it as efficient as possible :)
That may be because the boards do not support S3 power state and do not sleep (even the workstation boards). But a NAS does not sleep, it idles.
 

rlentz2022

Dabbler
Joined
Jan 30, 2022
Messages
21
This is not highly recommended. You are much better off going with SFP+. The technology is better, more reliable, lower latency, less power-hungry, less subject to wiring issues, etc. I just finished writing a message to someone who probably spent a hundred bucks on a crappy 10G copper card and isn't having the best of luck with it. We have a nice 10 Gig Networking Primer for those unfamiliar with server-oriented high performance 10G networking...
I will look into this. Thanks!
 