1616995 · Cadet · Joined: Feb 4, 2021 · Messages: 4
Motherboard: Supermicro X11SPI-TF
CPU: Intel Xeon Silver 4210
CPU Cooler: Noctua NH-U12S DX-3647
Network: Mellanox MCX456A-ECAT 100GbE
SATA Expansion: LSI 9211-8i
RAM: Crucial 2 x 16GB DDR4-2133 ECC RDIMM
OS SSD: Samsung 970 Evo Plus 250GB
PSU: Corsair SF450
Storage SSD: Samsung 870 Evo (12 x 1TB in RAID 6)
Fans: Noctua NF-A14 140mm PWM
Case: Fractal Meshify 2 XL
I'm new to NAS and haven't worked with a server OS before. I haven't purchased anything yet and don't have an existing NAS at the moment. The use case is strictly storage and fast transfer speeds between two work computers running Windows 10 Pro. They will be connected directly, without a switch. No additional applications need to run on the NAS, and no applications on the W10 machines need to access the NAS other than Explorer for file transfers. Here's my reasoning/understanding for the components:
Fractal Meshify 2 XL
A rackmount server case would be too loud, since it relies on high-RPM fans and I work right beside the machine, so a NAS build in a tower case is a compromise for noise. This case can also house 18 drives.
SSD
Larger quantities of small-capacity drives seem to be cheaper than smaller quantities of high-capacity drives, since the two extra parity drives cost less at small capacities than at large ones. 12 x 1TB SATA SSDs should also be faster than 6 x 2TB SATA SSDs when striped in RAID, since reads and writes spread across more data drives. HDDs would be slower, and NVMe drives require special hardware for connectivity, which runs the cost up too much for me.
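To sanity-check that, here's my back-of-the-envelope math, assuming RAID 6 reserves two drives' worth of capacity for parity and each SATA SSD tops out around 560 MB/s sequential (both figures are my assumptions, not measurements):

```python
# Rough RAID 6 comparison: 12 x 1TB vs 6 x 2TB SATA SSDs.
SATA_SSD_MBPS = 560  # assumed per-drive sequential ceiling (SATA III)

def raid6_summary(drives: int, tb_per_drive: int) -> str:
    data_drives = drives - 2                    # RAID 6 uses 2 drives for parity
    usable_tb = data_drives * tb_per_drive
    stripe_mbps = data_drives * SATA_SSD_MBPS   # best-case sequential striping
    return (f"{drives} x {tb_per_drive}TB -> {usable_tb}TB usable, "
            f"~{stripe_mbps:,} MB/s")

print(raid6_summary(12, 1))  # 12 x 1TB -> 10TB usable, ~5,600 MB/s
print(raid6_summary(6, 2))   # 6 x 2TB  -> 8TB usable, ~2,240 MB/s
```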
Mellanox 100GbE NICs
100GbE is overkill: 12 SSDs in RAID 6 should end up around 5,600 MB/s, or 5.6 GB/s, best case. Realistically, I'll probably get half those speeds, but in case it goes higher, a 50GbE NIC won't be the bottleneck and I can get the extra 600 MB/s. Or more, if and when I expand the array later.
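For reference, the line-rate arithmetic I'm going by, ignoring protocol overhead (so real SMB numbers will land lower); the 5,600 MB/s array figure is the best-case estimate from above:

```python
# Compare NIC line rates against the array's best-case sequential speed.
ARRAY_MB_S = 5_600  # assumed best case for 12 x 1TB SATA SSDs in RAID 6

def line_rate_mb_s(gbit: int) -> float:
    return gbit * 1000 / 8  # 1 Gbit/s = 125 MB/s, before protocol overhead

for gbit in (10, 25, 40, 50, 100):
    rate = line_rate_mb_s(gbit)
    verdict = "bottleneck" if rate < ARRAY_MB_S else "headroom"
    print(f"{gbit:>3}GbE = {rate:>8,.0f} MB/s -> {verdict}")
# 40GbE (5,000 MB/s) would clip ~600 MB/s off the array's best case;
# 50GbE (6,250 MB/s) and 100GbE (12,500 MB/s) would not.
```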
LSI 9211 8i
The Supermicro X11SPI-TF has 10 onboard SATA connections, minus 1 when using an M.2 drive, so the LSI card can give me more. I've read these cards need to be crossflashed to IT-mode firmware (with a specific firmware version, matched to the TrueNAS driver) to work properly, and that they run very hot, so I should point a PCI slot fan at the heatsink to keep it cool. I also saw on a thread here that splitting a RAID array across onboard SATA ports and LSI ports is fine.
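For reference, the crossflash sequence I've seen described, run from a bootable DOS/EFI environment with the LSI sas2flash utility (the firmware image filename comes from the 9211-8i firmware package and may differ, so treat it as a placeholder):

```
sas2flash -listall          # confirm the 9211-8i is detected, note current firmware
sas2flash -o -e 6           # erase the existing IR firmware (don't reboot mid-flash)
sas2flash -o -f 2118it.bin  # write the IT-mode firmware image
sas2flash -listall          # verify the card now reports IT firmware
```

Skipping the boot ROM flash is apparently fine for a pure data HBA, since I won't be booting from it.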
Am I missing anything? Besides the LSI flashing, I'm hoping for a somewhat plug-and-play setup. So I wanted to ask if anyone sees any flaws with what I've spec'd out, and whether any tricky setup is required to get the full bandwidth of SSDs versus just using HDDs. Is that CPU lackluster or overkill? Can I get away with less RAM? Would a 50GbE NIC suffice if I can't expect the full 5.6 GB/s speeds?
Thanks in advance.