Does this build make sense?

1616995

Cadet
Joined
Feb 4, 2021
Messages
4
Motherboard: Supermicro X11SPI-TF
CPU: Intel Xeon Silver 4210
CPU Cooler: Noctua NH-U12S DX 3647
Network: Mellanox MCX456A-ECAT 100GbE
SATA Expansion: LSI 9211-8i
RAM: Crucial 2 x 16GB DDR4-2133 ECC RDIMM
OS SSD: Samsung 970 Evo Plus 250GB
PSU: Corsair SF450
Storage SSD: Samsung 870 Evo (12 x 1TB in RAID 6)
Fans: Noctua NF-A14 140mm PWM
Case: Fractal Meshify 2 XL

I'm new to NAS and haven't messed around with a server OS before. I haven't purchased anything yet and don't have an existing NAS at the moment. The use case is strictly storage and fast transfer speeds between two work computers running Windows 10 Pro. They will be connected directly, without a switch. No additional applications need to run on the NAS, and no applications on the W10 machines need to access the NAS (other than Explorer for file transfers). Here's my reasoning/understanding for the components:

Fractal Meshify 2 XL
A rackmount server case would be too loud since it relies on high-RPM fans, and I work right beside my computer, so a NAS build in a tower case is a compromise for noise. This case can also house 18 drives.

SSD
More small-capacity drives seem to be cheaper than fewer large-capacity drives, since the two extra drives for parity cost less at small capacities than at large ones. 12 x 1TB SATA SSDs should also be faster than 6 x 2TB SATA SSDs when striped in RAID. HDDs would be slower, and NVMe requires special hardware for connectivity that runs the cost up too much for me.
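Here's a rough sanity check of that reasoning. The 560 MB/s per-drive figure is just the rated sequential speed, the "speed scales with data drives" rule is a best-case assumption, and the prices are placeholders to swap for real listings:

```python
# Rough comparison of the two RAIDZ2-style layouts discussed above.
# Assumptions: ~560 MB/s sequential per SATA SSD, streaming speed scales with
# the number of data drives (best case), placeholder prices.

def raidz2_summary(n_drives, tb_per_drive, price_per_drive, mb_s_per_drive=560):
    """Usable TB, best-case streaming MB/s, and total cost for one RAIDZ2 vdev."""
    data_drives = n_drives - 2                   # two drives' worth of parity
    usable_tb = data_drives * tb_per_drive       # before ZFS metadata/slop overhead
    streaming = data_drives * mb_s_per_drive     # optimistic sequential estimate
    return usable_tb, streaming, n_drives * price_per_drive

for label, n, tb, price in [("12 x 1TB", 12, 1, 100), ("6 x 2TB", 6, 2, 200)]:
    usable, speed, cost = raidz2_summary(n, tb, price)
    print(f"{label}: ~{usable} TB usable, ~{speed} MB/s streaming, ${cost} (placeholder prices)")
```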

Mellanox 100GbE NICs
100GbE is overkill, and 12 SSDs in RAID 6 should end up around 5,600 MB/s, or 5.6 GB/s. Realistically I'll probably get half those speeds, but in case it goes higher, a 50GbE NIC won't be the bottleneck and I can get the extra 600 MB/s, or more if and when I expand the array later.
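To put the NIC choice in context, a back-of-the-envelope comparison (this assumes the optimistic 5,600 MB/s array estimate above; real RAIDZ2-over-SMB throughput will usually land well below it, and link rates ignore protocol overhead):

```python
# Compare NIC line rate against the optimistic 5,600 MB/s array estimate above.

array_mb_s = 5_600                       # 10 data drives x 560 MB/s, best case

for gbe in (10, 25, 40, 50, 100):
    line_rate_mb_s = gbe * 1000 / 8      # Gbit/s -> MB/s, no protocol overhead
    ceiling = min(array_mb_s, line_rate_mb_s)
    print(f"{gbe:>3} GbE ~ {line_rate_mb_s:>6,.0f} MB/s link -> transfer ceiling ~ {ceiling:,.0f} MB/s")
```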

LSI 9211-8i
The Supermicro X11SPI-TF has 10 onboard SATA connections (minus one when an M.2 drive is used), so the LSI card can give me more. I've read these cards need to be flashed to IT mode with an older firmware to work properly with TrueNAS, and that they run very hot, so I should mount a fan below the card to help keep it cool. I also saw in a thread here that splitting a RAID array across onboard SATA ports and LSI ports is fine.

Am I missing anything? Besides the LSI flashing, I'm hoping for a somewhat plug-and-play setup. So I wanted to ask if anyone sees any flaws in what I've spec'd out, and whether any tricky setup is required to get the full bandwidth of SSDs versus just using HDDs. Is that CPU lackluster or overkill? Can I get away with less RAM? Would a 50GbE NIC suffice if I can't expect the full 5.6 GB/s speeds?

Thanks in advance.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
100 GbE and SATA storage looks like a mismatch. Xeon Silver for pure storage is overkill.
"Raid6" is not ZFS terminology.
Higher quantities of small capacity drives seem to be cheaper than lower quantities of higher capacity drives since the two extra drives for parity will cost less at small capacities than higher. 12 x 1TB SATA SSDs vs 6 x 2TB SATA SSDs will result in a faster speed when striped with raid.
I don't quite understand your calculation here, and a 12-wide vdev is a bit too wide.

If speed, and making use of 100 GbE with just two clients(!), is your priority, you should use mirrors, and NVMe. If capacity and cost are concerns, then I suggest going for RAIDZ and reviewing your calculations. SSDs have much lower URE rates than HDDs, which makes RAIDZ safe again.
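For a concrete feel of that trade-off, a small sketch with 12 x 1TB SATA SSDs (the 560 MB/s per-drive figure and the "speed scales with data drives" rule are assumptions; real numbers depend on record size, sync settings, and SMB):

```python
# Capacity vs. best-case streaming for a few 12-drive layouts (1TB SATA SSDs).
# Assumed: 560 MB/s per drive; mirror reads can be served from both halves of
# each pair, mirror writes only from one half.

PER_DRIVE_MB_S = 560

layouts = {
    # name: (drives counted for reads, drives counted for writes, usable TB)
    "6 x 2-way mirrors":  (12, 6, 6),
    "1 x 12-wide RAIDZ2": (10, 10, 10),
    "2 x 6-wide RAIDZ2":  (8, 8, 8),
}

for name, (read_d, write_d, usable_tb) in layouts.items():
    print(f"{name}: ~{usable_tb} TB usable, "
          f"~{read_d * PER_DRIVE_MB_S} MB/s read, ~{write_d * PER_DRIVE_MB_S} MB/s write")
```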
 

1616995

Cadet
Joined
Feb 4, 2021
Messages
4
@Etorix
Okay. So my understanding was that 12 drives in a RAIDZ2 array (10 data + 2 parity) at 560 MB/s read speed each would give a potential read speed of 5,600 MB/s. That's why I was thinking more drives were better for speed, and a 50 or 100GbE NIC would let the 5,600 MB/s come through.

I guess my priority isn't necessarily to fully saturate a 100GbE NIC, but just to have really reliable storage in RAIDZ or RAIDZ2 and, from there, to make it as fast as possible with SATA SSDs. I'm flexible with the drive configuration, but I need 20TB that can later be expanded to 30TB next year.
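If those capacity targets drive the layout, here's a quick count of the drives a single RAIDZ2 vdev would need (ignoring ZFS metadata and slop overhead, so some headroom should be planned; drive sizes are just examples):

```python
# Drives needed in a single RAIDZ2 vdev to reach a usable-capacity target.
# Ignores ZFS overhead; very wide vdevs are generally discouraged, so large
# targets with small drives point towards multiple vdevs instead.
import math

def raidz2_drives_for(target_tb, tb_per_drive):
    data_drives = math.ceil(target_tb / tb_per_drive)
    return data_drives + 2               # plus two parity drives

for target_tb in (20, 30):
    for tb_per_drive in (2, 4):
        n = raidz2_drives_for(target_tb, tb_per_drive)
        print(f"{target_tb} TB usable with {tb_per_drive} TB drives: {n}-wide RAIDZ2")
```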

I spent the last couple of weeks figuring out hardware and which OS would generally be most suitable. I was recommended TrueNAS, so I figured I could learn the TrueNAS details later, after building. I didn't even know what a vdev was before posting; I just wanted to know the hardware was okay. I'll start figuring out the TrueNAS configuration now.

So if I downgrade the CPU, do you think the components I listed in my first post will function smoothly with TrueNAS as long as I get the settings in TrueNAS correct? What number of drives would you recommend for a single vdev? Or should I do multiple vdevs? Should I get a slower NIC?

Thanks for your input.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
What are the use-case and budget?
 

1616995

Cadet
Joined
Feb 4, 2021
Messages
4
What are the use-case and budget?
Use case is strictly storage accessible from two Windows 10 PCs: 20TB of storage, hoping for the fastest transfer speeds possible without NVMe. Budget is 1,250 USD without drives. The budget is a bit flexible, but I'm trying to come in cheaper than a 12-bay Synology.
 

ccssid

Explorer
Joined
Dec 12, 2016
Messages
86
You want to use 12 SSDs @ 1TB, but later you state 20 TB of storage that you want to upgrade to 30 TB next year? Also, your rationale for costing out your SSDs (1TB vs 2TB) is not in line with what you would be paying for the Xeon and the Mellanox.
 

1616995

Cadet
Joined
Feb 4, 2021
Messages
4
Yeah, I messed that up when I was typing. I meant 12 x 2TB drives for 20TB of usable storage, with the other two drives for parity. Sorry about that.

12 SATA SSDs with room for expansion. Regardless of drive capacity, I'm mainly concerned right now with making sure the hardware I listed will work with TrueNAS, and I'd like to learn what realistic performance I can expect. I think drive capacity is a smaller detail, unless it ties into how TrueNAS functions.
 

ccssid

Explorer
Joined
Dec 12, 2016
Messages
86
Ok. Now that you say it's strictly for storage, I would go with RAIDZ2. But you cannot just add drives to an existing pool. Please read up on how this works before purchasing your SSDs.
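A minimal illustration of that constraint, with assumed capacities: as things stand, you grow a ZFS pool by adding a whole new vdev, not by adding single drives to an existing RAIDZ2 vdev.

```python
# Pool capacity before and after expansion by adding a second RAIDZ2 vdev.
# Each vdev is (drive_count, tb_per_drive, parity_drives); ZFS overhead ignored.

def pool_usable_tb(vdevs):
    return sum((n - parity) * tb for n, tb, parity in vdevs)

pool = [(12, 2, 2)]                          # 12 x 2TB RAIDZ2 -> ~20 TB usable
print("Initial pool:", pool_usable_tb(pool), "TB usable")

pool.append((8, 2, 2))                       # later: add an 8 x 2TB RAIDZ2 vdev
print("After adding a second vdev:", pool_usable_tb(pool), "TB usable")
```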
 