New Core build advice needed

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
HBA vs. motherboard shouldn't make a lot of difference.
Optane disks are easy to get hold of - it depends on whether you can find the format you want.
M.2 - see NewEgg: don't be tempted by the smaller one. U.2 can be had on AliExpress / eBay.
Thanks. Are they available in M.2 SATA, since I need to mount them in an SDP11 adapter?

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Optane is NVMe only.

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
Just in case anyone is following this build at home, here are some initial benchmarks on the SA500 'Red' drives. Same 32G test setup, over a mix of NFS and iSCSI targets on a 10G VLAN dedicated to storage traffic, with sync on and sync off to compare - and I even spun up a Windows VM, just for amusement really. I did try a pair of WD SN700 drives as a SLOG and performance did improve a small amount; I will test sync writes with the Optane drives soon. This was just to have some numbers to gauge VM performance while I wait on the rest of the components. I think the P1600X will serve a good role as the SLOG mirror. It doesn't help much, but every little bit helps keep the VMs happy, and I like having sync on for peace of mind.

These are runs of fio with a 4k block size and random I/O (since higher IOPS serves my use case better), in an Ubuntu 22.04 VM with 2 cores and 4G of RAM, nothing fancy.
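Roughly the sort of fio invocation behind these numbers - the randwrite line is the command I list further down, and the randread line is just its mirror image, so treat both as a sketch rather than the exact runs:

# 4k random read pass (mirror of the randwrite command listed later in the thread)
fio --randrepeat=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread --ramp_time=4

# 4k random write pass
fio --randrepeat=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --ramp_time=4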

My 'control', using local NVMe storage on the host:

Local SR storage baseline (LXD2)
read: IOPS=106k, BW=413MiB/s (433MB/s)(2467MiB/5977msec)
write: IOPS=97.7k, BW=382MiB/s (400MB/s)(3025MiB/7927msec); 0 zone resets


Some random tests; the pool is at about 50% capacity, holding mainly a handful of test VMs, some ISOs, and data files to test with.
===================== NFS benchmarks ===========================
All tests with 4k block size unless noted

Sync always
read: IOPS=69.0k, BW=270MiB/s (283MB/s)(3210MiB/11902msec)
write: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(4030MiB/191268msec); 0 zone resets

Sync standard (NFS 32k)
read: IOPS=72.7k, BW=284MiB/s (298MB/s)(2960MiB/10417msec)
write: IOPS=1795, BW=56.1MiB/s (58.9MB/s)(3755MiB/66881msec); 0 zone resets

Sync standard 4k
read: IOPS=68.1k, BW=266MiB/s (279MB/s)(3100MiB/11643msec)
write: IOPS=4705, BW=18.4MiB/s (19.3MB/s)(3994MiB/217259msec); 0 zone resets

Sync standard 4k (Debian)
read: IOPS=71.8k, BW=280MiB/s (294MB/s)(3272MiB/11668msec)
write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(4031MiB/262747msec); 0 zone resets

bs=32k (Debian VM)
read: IOPS=15.8k, BW=493MiB/s (517MB/s)(2138MiB/4340msec)
write: IOPS=1673, BW=52.3MiB/s (54.9MB/s)(3783MiB/72291msec); 0 zone resets

Sequential test for reference
Sync on
read: IOPS=142, BW=657MiB/s (689MB/s)(1948MiB/2966msec)
write: IOPS=54, BW=234MiB/s (246MB/s)(3480MiB/14841msec); 0 zone resets
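For reference, the "Sync always" / "Sync standard" labels in these runs are just the ZFS sync property being flipped on the dataset behind the NFS share between runs - something like this, with tank/vmstore standing in as a placeholder for my dataset name:

# force every write to be a sync write, then back to standard (placeholder dataset name)
zfs set sync=always tank/vmstore
zfs set sync=standard tank/vmstore
# confirm the current setting
zfs get sync tank/vmstore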

-------------- SLOG Tests (WD SN700 500G mirror 512bs) --------------------------
Sync: always
read: IOPS=283k, BW=1107MiB/s (1161MB/s)(2626MiB/2371msec)
write: IOPS=6606, BW=25.8MiB/s (27.1MB/s)(3989MiB/154574msec); 0 zone resets
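For anyone doing the SLOG part from the shell rather than the GUI, attaching the mirrored SLOG for these tests boils down to one command - pool and device names below are just placeholders for the two NVMe drives:

# add the two NVMe drives as a mirrored SLOG (placeholder pool/device names)
zpool add tank log mirror nvd0 nvd1
# the new "logs" mirror should show up under the pool
zpool status tank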


Test iSCSI target

4k tests (zvol 512b bs)

Sync: disabled
read: IOPS=70.7k, BW=276MiB/s (290MB/s)(3001MiB/10862msec)
write: IOPS=62.9k, BW=246MiB/s (258MB/s)(2733MiB/11124msec); 0 zone resets

Sync: always
read: IOPS=76.2k, BW=298MiB/s (312MB/s)(2916MiB/9793msec)
write: IOPS=6816, BW=26.6MiB/s (27.9MB/s)(3942MiB/148032msec); 0 zone resets

Sync: standard
read: IOPS=81.5k, BW=318MiB/s (334MB/s)(2965MiB/9311msec)
write: IOPS=39.6k, BW=155MiB/s (162MB/s)(3051MiB/19723msec); 0 zone resets
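The "512b bs" note above refers to the volblocksize on the zvol backing the iSCSI extent; creating that from the shell looks roughly like this, with the size and names as placeholders:

# sparse zvol for the iSCSI extent, 512-byte volblocksize to match the note above
# (placeholder pool/zvol names and size)
zfs create -s -V 200G -o volblocksize=512 tank/iscsi-test0
zfs get volblocksize tank/iscsi-test0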

An iperf3 test of the link to the NAS:

This is with a Chelsio 2-port NIC in the NAS
iperf3
Connecting to host x.x.x.x, port 7575
[  5] local x.x.x.x port 54460 connected to x.x.x.x port 7575
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.18 GBytes  10.1 Gbits/sec  1581    503 KBytes
[  5]   1.00-2.00   sec  1.18 GBytes  10.1 Gbits/sec  1535    461 KBytes
[  5]   2.00-3.00   sec  1.23 GBytes  10.6 Gbits/sec  1655    636 KBytes
[  5]   3.00-4.00   sec  1.19 GBytes  10.2 Gbits/sec  1518    484 KBytes
[  5]   4.00-5.00   sec  1.23 GBytes  10.5 Gbits/sec  1673    491 KBytes
[  5]   5.00-6.00   sec  1.19 GBytes  10.3 Gbits/sec  1604    656 KBytes
[  5]   6.00-7.00   sec  1.19 GBytes  10.2 Gbits/sec  1584    898 KBytes
[  5]   7.00-8.00   sec  1.12 GBytes  9.62 Gbits/sec  1436    741 KBytes
[  5]   8.00-9.00   sec  1.14 GBytes  9.83 Gbits/sec  1177    486 KBytes
[  5]   9.00-10.00  sec  1.23 GBytes  10.5 Gbits/sec  1342    660 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.9 GBytes  10.2 Gbits/sec  15105         sender
[  5]   0.00-10.04  sec  11.9 GBytes  10.2 Gbits/sec                receiver
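That run is just a plain iperf3 pair across the storage VLAN - server on the NAS, client on the hypervisor host, with the port matching the output above:

# on the NAS (server side)
iperf3 -s -p 7575

# on the hypervisor host (client side); x.x.x.x = the NAS address on the storage VLAN
iperf3 -c x.x.x.x -p 7575 -t 10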

Thanks for all the help from everyone who has schooled me so far. I think I have a design that will work for me, at least.

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
I think the Optane drives were a wise choice for SLOG.

Using the Optane P1600x drives in a mirrored SLOG - fio test from within a VM, over NFS:
Sync: standard 4k
read: IOPS=69.0k, BW=273MiB/s (287MB/s)(3024MiB/11065msec)
write: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(3429MiB/37044msec); 0 zone resets

The previous benchmark with no SLOG:
read: IOPS=69.0k, BW=270MiB/s (283MB/s)(3210MiB/11902msec)
write: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(4030MiB/191268msec); 0 zone resets

For giggles I ran a similar fio test on the pool from the NAS host itself, and here is what it reports:

read: IOPS=139k, BW=542MiB/s (569MB/s)(2107MiB/3886msec)
write: IOPS=68.6k, BW=268MiB/s (281MB/s)(2964MiB/11055msec); 0 zone resets

Command used: fio --randrepeat=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --ramp_time=4

To improve IOPS with sync writes, I gather I need to add more mirrored vdevs, and another 32GB of RAM would probably do this setup well. I do see slightly better performance on iSCSI zvols, but only with sync set to standard, which appears to behave as async writes, since it slows down when I set sync to always and I see nothing but zeros from zilstat when sync is set to standard. So I'm sticking with NFS, with sync enabled. Not as fast, but I prefer data integrity over blazing file transfer speed. My goal is to get IOPS as close to local storage as possible without compromising the data. Getting closer!
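If anyone wants to check the same thing on their pool, watching the ZIL while fio runs is enough to tell whether sync writes are actually landing, and growing IOPS is just a matter of more mirror vdevs - roughly like this, with placeholder pool and disk names:

# watch ZIL activity once per second while the fio test runs in the VM
# (zilstat ships with TrueNAS CORE; all zeros means no sync writes are hitting the ZIL)
zilstat 1

# later on: grow the pool with another mirrored pair of data disks (placeholder names)
zpool add tank mirror da4 da5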