IOPS for both Read and Write Significantly Underperforming for NVMe

qtx

Cadet
Joined
Mar 13, 2020
Messages
6

Problem

I've run into a problem similar to those that other users have encountered; however, my setup appears to be unique, and I'm not sure whether I have something misconfigured.

I have a 24-drive NVMe pool arranged as 6 RAIDZ2 vdevs, and I'm seeing roughly 153k IOPS for both reads and writes. That seems very low considering each drive is rated at 245k IOPS sustained.

Any ideas?
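For reference, here's the rough arithmetic behind my expectation (a sketch, using the common rule of thumb that a RAIDZ vdev delivers roughly the small-block random IOPS of a single member drive, so the pool scales with vdev count rather than drive count):

```python
# Rough IOPS ceiling estimate for a RAIDZ pool. Rule of thumb, not a
# guarantee: small random I/O on a RAIDZ vdev performs roughly like one
# member drive, so the pool's ceiling scales with the number of vdevs.

DRIVE_IOPS = 245_000   # per-drive sustained rating from the spec sheet
VDEVS = 6              # 6 x RAIDZ2 vdevs

pool_ceiling = DRIVE_IOPS * VDEVS   # optimistic upper bound: 1,470,000
observed = 153_000                  # what the benchmark actually shows

print(f"estimated ceiling: {pool_ceiling:,} IOPS")
print(f"observed: {observed:,} IOPS ({observed / pool_ceiling:.0%} of estimate)")
```

Even against that conservative per-vdev estimate, the observed number is only about a tenth of what the hardware should be able to do.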

Setup

System:
TrueNAS-13.0-U6.1 (Fresh)
2x AMD EPYC 7232P 8-Core Processor
256 GB ECC RAM
GIGABYTE R272-Z32 (rev. A00/B00)


Pool Information:
6 vdevs, RAIDZ2, 24 NVMe U.2 drives
+ 24 x Seagate XP1920LE10002 1.92TB PCIe NVMe Solid State Drive
Compression disabled, sync disabled

Benchmark

[Attached screenshot: 1703107545838.png — benchmark results]
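In case it helps anyone reproduce this, here is roughly the benchmark configuration I'd run with fio over SSH rather than the web shell (a sketch: the dataset path, block size, runtime, and job/queue-depth values are assumptions to adjust for your own pool):

```python
# Sketch: generate an fio job file for a 4k random-write IOPS test.
# Everything below is an assumption to tune, not a measured-good config:
# in particular, `directory` must point at a dataset on the pool under test.

JOB_FILE = "randwrite.fio"

job = """\
[global]
ioengine=posixaio
direct=1
bs=4k
rw=randwrite
size=1g
runtime=60
time_based=1
group_reporting=1

[pooltest]
directory=/mnt/tank/bench
numjobs=8
iodepth=32
"""

with open(JOB_FILE, "w") as f:
    f.write(job)

# Then, from an SSH session (not the web shell):  fio randwrite.fio
print(f"wrote {JOB_FILE}; run it with: fio {JOB_FILE}")
```

With `numjobs=8` and `iodepth=32` the test keeps enough I/O in flight to actually exercise all six vdevs; a single job at queue depth 1 would mostly measure latency, not the pool's ceiling.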

Observations

  • I've tried creating pools with several different vdev configurations.
  • Every variation shows nearly identical performance, as if there were a system-wide cap.
  • I am running these benchmarks using the web client.
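One thing worth ruling out, given that every layout lands on the same number: if the benchmark runs with few jobs at a low queue depth, per-operation latency caps IOPS no matter how wide the pool is. A quick Little's-law sketch (the latency values here are illustrative assumptions, not measurements from this system):

```python
# Little's law applied to storage benchmarks:
#   IOPS ~= outstanding I/Os (queue depth) / per-op latency.
# A narrow benchmark hits this ceiling regardless of vdev count.

def iops_ceiling(queue_depth: int, latency_s: float) -> float:
    """Max IOPS a single job can drive at a given queue depth and latency."""
    return queue_depth / latency_s

# Illustrative latency assumption: 50 microseconds per op end-to-end.
latency = 50e-6

for qd in (1, 8, 32):
    print(f"QD={qd:>2}: ~{iops_ceiling(qd, latency):,.0f} IOPS")
```

At an assumed 50 µs per op, a queue depth of 8 already lands in the same ballpark as the observed 153k, which is why it matters exactly how the web-client benchmark issues its I/O.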

Question

Is there a setting somewhere that could be limiting performance?
Running this test from the web client shell shouldn't affect performance, correct?
 