Littlejd97
I was wondering if someone could help me understand if this is just expected performance, or if there is something misconfigured.
I have two 4TB WD Red CMR (WD40EFRX) drives in a mirror, and I would expect 100 MB/s+ per drive for sequential I/O, but I'm getting far less than that when running tests.
I disabled all services (NFS, iSCSI, etc.), then ran these tests directly on the machine itself. I also set `primarycache` to `metadata` so cached reads wouldn't skew the results.
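For reference, the `primarycache` change was along these lines; I'm assuming the dataset is the pool root `NAS` (the same path the tests hit), and setting it back to `all` afterwards restores the default:

Code:
zfs set primarycache=metadata NAS    # cache only metadata in ARC, so file reads go to the disks
zfs get primarycache NAS             # confirm the property is active on the dataset
zfs set primarycache=all NAS         # restore the default once benchmarking is done

The fio runs are below: sequential read first, then sequential write.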
Code:
root@truenas[/mnt/NAS]# fio --filename=/mnt/NAS/test.file --name=sync_seqread --rw=read --bs=4M --direct=1 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=1G --loops=10 --group_reporting
sync_seqread: (g=0): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
sync_seqread: Laying out IO file (1 file / 1024MiB)
sync_seqread: (groupid=0, jobs=1): err= 0: pid=8979: Wed Nov  2 20:22:40 2022
  read: IOPS=22, BW=88.2MiB/s (92.5MB/s)(10.0GiB/116044msec)
    clat (usec): min=243, max=841757, avg=45324.58, stdev=33266.73
     lat (usec): min=243, max=841758, avg=45325.38, stdev=33266.80
    clat percentiles (usec):
     |  1.00th=[   260],  5.00th=[   277], 10.00th=[   441], 20.00th=[ 41681],
     | 30.00th=[ 43779], 40.00th=[ 45351], 50.00th=[ 46924], 60.00th=[ 47973],
     | 70.00th=[ 49546], 80.00th=[ 51643], 90.00th=[ 54789], 95.00th=[ 65274],
     | 99.00th=[175113], 99.50th=[270533], 99.90th=[304088], 99.95th=[438305],
     | 99.99th=[843056]
   bw (  KiB/s): min=15937, max=2096956, per=100.00%, avg=91020.76, stdev=141078.11, samples=226
   iops        : min=    3, max=  511, avg=21.61, stdev=34.43, samples=226
  lat (usec)   : 250=0.23%, 500=12.58%, 750=0.39%
  lat (msec)   : 50=60.59%, 100=24.41%, 250=1.21%, 500=0.55%, 1000=0.04%
  cpu          : usr=0.01%, sys=0.96%, ctx=8935, majf=0, minf=1024
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=10.0GiB (10.7GB), run=116044-116044msec
root@truenas[/mnt/NAS]#
Code:
root@truenas[/mnt/NAS]# fio --filename=/mnt/NAS/test.file --name=sync_seqwrite --rw=write --bs=4M --direct=1 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=5G --loops=1 --group_reporting | tee -a "${LOGFILE}"
sync_seqwrite: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.28
Starting 1 process
sync_seqwrite: Laying out IO file (1 file / 5120MiB)
sync_seqwrite: (groupid=0, jobs=1): err= 0: pid=8655: Wed Nov  2 20:19:21 2022
  write: IOPS=8, BW=34.8MiB/s (36.5MB/s)(5120MiB/147112msec); 0 zone resets
    clat (msec): min=21, max=281, avg=114.41, stdev=35.53
     lat (msec): min=21, max=281, avg=114.41, stdev=35.53
    clat percentiles (msec):
     |  1.00th=[   78],  5.00th=[   88], 10.00th=[   89], 20.00th=[   92],
     | 30.00th=[   99], 40.00th=[  100], 50.00th=[  102], 60.00th=[  107],
     | 70.00th=[  111], 80.00th=[  126], 90.00th=[  169], 95.00th=[  203],
     | 99.00th=[  241], 99.50th=[  253], 99.90th=[  275], 99.95th=[  284],
     | 99.99th=[  284]
   bw (  KiB/s): min=15603, max=49053, per=100.00%, avg=35672.08, stdev=8180.57, samples=285
   iops        : min=    3, max=   11, avg= 7.97, stdev= 2.05, samples=285
  lat (msec)   : 50=0.39%, 100=46.72%, 250=52.27%, 500=0.62%
  cpu          : usr=0.47%, sys=0.45%, ctx=2671, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1280,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=5120MiB (5369MB), run=147112-147112msec
root@truenas[/mnt/NAS]
While running these tests, I also watched `zpool iostat`.
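Roughly this invocation; the `-v` flag gives the per-vdev breakdown shown below, and the 5-second refresh interval is just an example, not necessarily what I used:

Code:
zpool iostat -v NAS 5    # per-vdev I/O stats for the NAS pool, refreshed every 5 seconds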
During the read test:
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
NAS                                             2.68T   972G    655      0  81.9M      0
  mirror-0                                      2.68T   972G    655      0  81.9M      0
    gptid/afc545f7-75ad-11eb-9241-f9ace32b146a      -      -    329      0  41.2M      0
    gptid/aff57292-75ad-11eb-9241-f9ace32b146a      -      -    325      0  40.7M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
And during the write test:
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
NAS                                             2.68T   969G      0    364      0  71.4M
  mirror-0                                      2.68T   969G      0    364      0  71.4M
    gptid/afc545f7-75ad-11eb-9241-f9ace32b146a      -      -      0    182      0  35.7M
    gptid/aff57292-75ad-11eb-9241-f9ace32b146a      -      -      0    182      0  35.7M
----------------------------------------------  -----  -----  -----  -----  -----  -----
These numbers just seem way too low.
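To put rough numbers on that: the 100 MB/s per-drive figure is my own assumption for these disks, and the observed figures are taken from the `zpool iostat` output above.

Code:
expected:  read  ~ 2 x 100 MB/s = 200+ MB/s  (a mirror can read from both disks at once)
           write ~ 1 x 100 MB/s = 100+ MB/s  (every write has to land on both disks)
observed:  read  ~ 82 MB/s pool-wide  (~41 MB/s per disk)
           write ~ 71 MB/s pool-wide  (~36 MB/s per disk)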