PCIe 4.0 enabled VMware iSCSI datastore server

xyzzy (Explorer) · Joined Jan 26, 2016 · Messages: 76
This TrueNAS CORE box is going to be used exclusively for VMware ESXi datastores via iSCSI (with sync writes enabled).

Supermicro H12SSL-NT
Epyc 7262
2 x DDR4-3200 64GB ECC DIMMs (fully populate with 8 eventually)
Noctua NH-U12S TR4-SP3 CPU cooler/fan
Seasonic Prime TX 750W power supply

Boot pool:
-- Intel S4610 SATA SSDs (240 GB) mirrored
Data pool #1 (new HW):
-- 2 Intel P5510 NVMe PCIe4 SSDs (3.84 TB) mirrored
-- 1 or 2 Intel P4801x NVMe PCIe3 SSDs (100 GB) (striped if more than 1) as SLOG
Data pool #2 (reusing existing HW):
-- 4 WD Gold 10 TB in RAID 10
-- 1 or 2 Intel S3710 SATA SSDs (200 GB) (striped if more than 1) as SLOG
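
As a rough sketch of how data pool #1 above might be laid out on TrueNAS CORE (FreeBSD), with hypothetical device names (nvd0/nvd1 for the P5510s, nvd2/nvd3 for the P4801Xs; these names are assumptions, adjust to the actual system):

```shell
# Data pool #1: mirrored P5510s (hypothetical device names)
zpool create tank1 mirror nvd0 nvd1
# Start with a single P4801X as SLOG...
zpool add tank1 log nvd2
# ...and stripe in a second log vdev later if one isn't enough
zpool add tank1 log nvd3
```

Pool #2 would follow the same pattern with the WD Gold pairs as mirror vdevs and the S3710s as log devices.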

NVMe SSDs will be connected to onboard SlimSAS x8 ports or Supermicro AOC-SLG4-4E4T (which uses PCIe 4.0 x16 slot). SATA disks will be connected to onboard SlimSAS x8 port (with breakout cable) or Broadcom 9305-16i.

I'm really unsure about the NICs. The P5510s have sequential specs of 6,500 MB/s read and 3,400 MB/s write, but they won't hit that under ZFS. Still, I think that pool will easily saturate a 10GbE connection. Should I go with a 25GbE NIC, or try to get iSCSI multipath working with multiple 10GbE links? Or should I try a 40GbE NIC to support multiple NVMe pools like data pool #1 in the future?
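
For the multipath option, the ESXi side is mostly iSCSI port binding plus a round-robin path policy. A sketch, where the adapter, vmkernel, and device names are all placeholders (not from the actual build):

```shell
# Bind two vmkernel ports (each on its own 10GbE uplink/subnet) to the
# software iSCSI adapter -- vmhba64, vmk1, vmk2 are placeholder names
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
# Set round-robin multipathing on the datastore device (placeholder NAA ID)
esxcli storage nmp device set --device=naa.600... --psp=VMW_PSP_RR
```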

The motherboard was chosen for PCIe 4.0 expansion (more NVMe drives) in the future.

The CPU was chosen as it's the cheapest Rome CPU that has full 8 channel memory bandwidth (although I'm unsure how much bandwidth really matters with TrueNAS).

Thanks in advance!
 

xyzzy (Explorer) · Joined Jan 26, 2016 · Messages: 76
Hi guys - any thoughts or suggestions on my proposed build, especially the NICs?
 

Tigersharke (BOfH in User's clothing, Administrator/Moderator) · Joined May 18, 2016 · Messages: 892
The tech in my primary box (which runs FreeBSD) is ancient by comparison: an AMD Phenom II X6, a Sapphire Radeon Pulse RX 580, and a dual-port Intel gigabit Ethernet card, all on an ASUS motherboard. I have no personal experience with much of what you describe, and would likely have no need for similar bandwidth, so I can't say anything about your proposed hardware beyond the fact that I prefer AMD. Hopefully others will comment before too long.
 

kspare (Guru) · Joined Feb 19, 2015 · Messages: 508
You would mirror your SLOG, not stripe them.

Use Chelsio NICs!
 

xyzzy (Explorer) · Joined Jan 26, 2016 · Messages: 76
> You would mirror your slog not stripe them.
> use chelsios nics!
On the SLOGs, I totally get that mirroring would be best from a safety standpoint, but that scenario (an enterprise SSD failing right after an unclean shutdown) is remote enough that I generally skip it. Instead, I'm thinking I *might* need to stripe the SLOG SSDs (PCIe Gen 3) to keep up with the data drives (PCIe Gen 4 SSDs). I figured I'd start with one and then see if another is needed.
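
For anyone comparing the two approaches, the difference on the pool side is a single keyword. A hypothetical sketch (pool and device names assumed, and you would pick one form or the other, not both):

```shell
# Striped SLOG: two independent log vdevs -- more throughput, no redundancy
zpool add tank1 log nvd2 nvd3
# Mirrored SLOG: redundancy, but the write throughput of a single device
zpool add tank1 log mirror nvd2 nvd3
```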

I have no experience with Chelsio but know they're popular in this community. Do they interoperate well with Intel NICs?

Also, any thoughts on the NIC links? (One big link or use multipath with 2?)
 

kspare (Guru) · Joined Feb 19, 2015 · Messages: 508
Oh, you won't convince me to go without redundancy on such an important item. I just had the power supply combiner fail on one of my storage servers. If you can make it redundant… do it!
 

johnnyt (Cadet) · Joined Jan 23, 2021 · Messages: 6
> This TrueNAS CORE box is going to be used exclusively for VMware ESXi datastores via iSCSI (with sync writes enabled).
> Supermicro H12SSL-NT
> Epyc 7262

Did you get your system working?

I'm looking at an EPYC 7282 with the H12SSL-C (1Gb NIC), because I have a 2.5Gb switch and have read that the 10Gb ports on the H12SSL-NT don't negotiate down to 2.5Gb. (I have a 2.5Gb PCIe card I'll use.)

Wanted to make sure this is supported by TrueNAS 12 CORE.
 

jgreco (Resident Grinch) · Joined May 29, 2011 · Messages: 18,680
2.5GbE is not recommended for iSCSI datastores. Just go with the 10G.
 