Hey All,
I have a pool that I'm using for VMs over NFS (it works better than iSCSI for me in this case). The pool is mounted to a 3-host XCP-ng pool via a 40Gbps Mellanox Ethernet connection. I've been able to iperf3 around 25-30Gbps, which seems reasonable for the connection; there is a lot of overhead, and I start getting CPU-bound with iperf.
Problem: I'm getting very slow fio speeds on my Ubuntu and Debian VMs. Only 3 VMs are currently running, and they have very little activity.
TrueNAS Server:
TrueNAS 12.0 U2.1
CPU: AMD EPYC 7282
Motherboard: ASRock Rack ROMED8-2T
Memory: 128GB
Mellanox 40G SFP (ConnectX-3)
4x 6.4TB PCIe 3.0 NVMe U.2 Intel P4610 (2x mirror vdevs, striped)
1x Intel Optane M.2 380GB SLOG (tested with and without)
Pool: Sync Disabled, Compression LZ4, Dedup Disabled, Atime Disabled
FIO Test:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=read
r=80.9k IOPS
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=write
w=94.6k IOPS
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4
r=444MiB/s
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=write --ramp_time=4
w=1736MiB/s
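To rule out a single libaio submitter being the bottleneck, the same 4k tests can be expressed as a fio job file with multiple jobs. This is just a sketch: the job names, filenames, and numjobs value are illustrative and not from the tests above.

```ini
; Hypothetical fio job file mirroring the 4k tests above,
; adding numjobs to check whether one job limits the IOPS.
[global]
ioengine=libaio
direct=1
gtod_reduce=1
randrepeat=1
bs=4k
iodepth=64
size=4G
numjobs=4          ; illustrative value, not from the original tests
group_reporting=1  ; report aggregate IOPS across the 4 jobs

[seq-read]
rw=read
filename=test-read

[seq-write]
rw=write
filename=test-write
stonewall          ; wait for the read job to finish before starting
```

Run with `fio jobfile.fio`; if aggregate IOPS scale well beyond the single-job numbers, the limit is per-job submission rather than the pool itself.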
I've deleted and rebuilt the pool several times, and each of these drives individually vastly outperforms the pool once it's created. I was hoping to get a little more performance out of this setup. Please let me know if there are any logs or additional tests that would help optimize or speed up this pool.
I have an SSD pool without a SLOG that performs almost as well as the NVMe pool (same tests, same order as above)...
4k read: r=78.3k IOPS
4k write: w=94.5k IOPS
4M read: r=444MiB/s
4M write: w=729MiB/s