Refill2630
Cadet
- Joined
- Jan 19, 2023
- Messages
- 4
Hello,
I am planning on building an all-SSD NAS and hope to achieve burst speeds of up to 100 Gbps. My use case is serving 12 servers at 10 Gbps each (boot drives as well as all the data). The 12 servers will run XCP-ng, and the SSD NAS will be their fast storage.
So far the components look like this:
- Supermicro chassis with SAS3 backplane supporting 16 SFF drives - 4 Mini-SAS connectors.
- Supermicro H12SSL-i Motherboard
- AMD EPYC 7302 CPU - 16 cores / 32 threads, 3.0 GHz
- 256GB DDR4 ECC Registered Memory
- 8x 32GB PC4-2400T M39AA4K40BB1 - CRC0Q
- Mellanox MCX416A-CCAT CX416A Dual-Port ConnectX-4 100GbE PCIe Adapter NIC
- 2x WD Red SN700 NVMe 500GB on the motherboard in RAID 1 (mirrored, so if one dies the show goes on).
- 2x 9300-8i 12 Gb/s SAS HBAs
- PCIe to NVMe adapter for 2 NVMe drives for L2ARC cache
- 2x Kingston KC3000 NVMe 1TB for L2ARC
- 16x WD Blue SA510 SATA 1TB 2.5"
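To sanity-check whether the SATA pool can actually feed a 100 GbE link, here is a back-of-envelope calculation. The per-drive sequential-read figure is an assumption based on typical SATA SSD spec-sheet numbers, not a measurement, and it ignores ZFS overhead, record size, and real-world queue depths:

```python
# Rough throughput sanity check -- all figures are spec-sheet assumptions.
SATA_SSD_MBPS = 530      # assumed sustained sequential read per SATA SSD (MB/s)
NUM_SSDS = 16
SAS3_LANE_GBPS = 12      # SAS3 lane rate (Gbit/s)
LANES_PER_HBA = 8        # each 9300-8i exposes 8 lanes
NUM_HBAS = 2
NIC_GBPS = 100           # ConnectX-4 single-port line rate

pool_gbps = NUM_SSDS * SATA_SSD_MBPS * 8 / 1000   # drives -> Gbit/s
hba_gbps = NUM_HBAS * LANES_PER_HBA * SAS3_LANE_GBPS

print(f"Pool sequential read: ~{pool_gbps:.0f} Gbit/s")
print(f"HBA raw bandwidth:    ~{hba_gbps} Gbit/s")
print(f"NIC line rate:         {NIC_GBPS} Gbit/s")
```

With these assumptions the 16 SATA drives top out around 68 Gbit/s, so the drives themselves, not the HBAs or the NIC, would be the first ceiling on a 100 GbE burst.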
Each server will have a 10 GbE or 25 GbE connection.
What are your thoughts? Is there a huge issue or a bottleneck that I do not see?