Write Speeds Equivalent to Etching Stone

loca5790

Dabbler
Joined
Oct 16, 2023
Messages
18
Hey All... New here but struggling.

TrueNAS SCALE setup
R730xd
E5-2690's
768 GB DDR4
4× 2TB NVMe drives in RAIDZ1
6× 2TB SSDs in RAIDZ2
3× 10TB WD Golds in RAIDZ1
Intel 10GbE NIC connected to a USW Pro 24 10G SFP+ port

4ktest: Laying out IO file (1 file / 4096MiB)
Jobs: 32 (f=32): [w(32)][100.0%][w=14.8MiB/s][w=3777 IOPS][eta 00m:00s]
4ktest: (groupid=0, jobs=32): err= 0: pid=1006209: Mon Oct 16 22:11:19 2023
write: IOPS=5605, BW=21.9MiB/s (22.0MB/s)(1314MiB/60009msec); 0 zone resets
clat (usec): min=13, max=14422, avg=5702.08, stdev=2053.11
lat (usec): min=13, max=14422, avg=5702.71, stdev=2053.22
clat percentiles (usec):
| 1.00th=[ 71], 5.00th=[ 186], 10.00th=[ 3785], 20.00th=[ 4359],
| 30.00th=[ 4817], 40.00th=[ 5407], 50.00th=[ 5932], 60.00th=[ 6521],
| 70.00th=[ 7046], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8455],
| 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9110], 99.95th=[ 9110],
| 99.99th=[ 9372]
bw ( KiB/s): min=14080, max=209456, per=100.00%, avg=22504.94, stdev=561.53, samples=3808
iops : min= 3520, max=52364, avg=5626.24, stdev=140.38, samples=3808
lat (usec) : 20=0.01%, 50=0.25%, 100=2.18%, 250=3.25%, 500=0.22%
lat (usec) : 750=0.11%, 1000=0.13%
lat (msec) : 2=0.59%, 4=6.76%, 10=86.51%, 20=0.01%
cpu : usr=0.19%, sys=1.51%, ctx=317906, majf=32, minf=829
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,336408,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=21.9MiB/s (22.0MB/s), 21.9MiB/s-21.9MiB/s (22.0MB/s-22.0MB/s), io=1314MiB (1378MB), run=60009-60009msec

command: fio --filename=test --direct=1 --rw=randrw --randrepeat=0 --rwmixread=0 --iodepth=16 --numjobs=32 --runtime=60 --group_reporting --name=4ktest --size=4G --bs=4k
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Could you please summarize this incomprehensible wall of text? Or at least format it in a way that makes it readable? There are "CODE" tags available for structured command output.

Then: are these two pools, or a single pool with two vdevs? In any case, are you aware that any RAIDZ vdev has roughly the random-write IOPS of a single disk, or even worse?
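For example, pasting the output of the following (inside CODE tags) would show each pool with its vdev layout and member disks:

zpool status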
 

loca5790

Dabbler
Joined
Oct 16, 2023
Messages
18
Updated:
TrueNAS SCALE setup
R730xd
E5-2690's
768 GB DDR4
4× 2TB NVMe drives in RAIDZ1
6× 2TB SSDs in RAIDZ2
3× 10TB WD Golds in RAIDZ1
Intel 10GbE NIC connected to a USW Pro 24 10G SFP+ port

4ktest: Laying out IO file (1 file / 4096MiB)
Jobs: 32 (f=32): [w(32)][100.0%][w=14.8MiB/s][w=3777 IOPS][eta 00m:00s]
4ktest: (groupid=0, jobs=32): err= 0: pid=1006209: Mon Oct 16 22:11:19 2023
write: IOPS=5605, BW=21.9MiB/s (22.0MB/s)(1314MiB/60009msec); 0 zone resets
clat (usec): min=13, max=14422, avg=5702.08, stdev=2053.11
lat (usec): min=13, max=14422, avg=5702.71, stdev=2053.22
clat percentiles (usec):
| 1.00th=[ 71], 5.00th=[ 186], 10.00th=[ 3785], 20.00th=[ 4359],
| 30.00th=[ 4817], 40.00th=[ 5407], 50.00th=[ 5932], 60.00th=[ 6521],
| 70.00th=[ 7046], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8455],
| 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9110], 99.95th=[ 9110],
| 99.99th=[ 9372]
bw ( KiB/s): min=14080, max=209456, per=100.00%, avg=22504.94, stdev=561.53, samples=3808
iops : min= 3520, max=52364, avg=5626.24, stdev=140.38, samples=3808
lat (usec) : 20=0.01%, 50=0.25%, 100=2.18%, 250=3.25%, 500=0.22%
lat (usec) : 750=0.11%, 1000=0.13%
lat (msec) : 2=0.59%, 4=6.76%, 10=86.51%, 20=0.01%
cpu : usr=0.19%, sys=1.51%, ctx=317906, majf=32, minf=829
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,336408,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=21.9MiB/s (22.0MB/s), 21.9MiB/s-21.9MiB/s (22.0MB/s-22.0MB/s), io=1314MiB (1378MB), run=60009-60009msec

command: fio --filename=test --direct=1 --rw=randrw --randrepeat=0 --rwmixread=0 --iodepth=16 --numjobs=32 --runtime=60 --group_reporting --name=4ktest --size=4G --bs=4k

It's three separate pools; each drive type is in its own pool.

The NVMe pool is dedicated to VMs right now.
The SSD pool handles swap and generic everyday items.
The WD HDD pool holds my backups and the Plex media I rarely access.

I understand there are limitations, but 22 MiB/s is slower than any single disk in any of the pools.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Is the recordsize on the pool you are testing really only 4K?! If it's at the default, you should be testing with bs=128k, the default recordsize for pools. I think you'll get much different results.
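Something like this would check the recordsize and rerun the test at 128k (the dataset path is just a placeholder, and the fio line is your original command with only the block size and job name changed):

zfs get recordsize tank/mydataset
fio --filename=test --direct=1 --rw=randrw --randrepeat=0 --rwmixread=0 --iodepth=16 --numjobs=32 --runtime=60 --group_reporting --name=128ktest --size=4G --bs=128k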
 

loca5790

Dabbler
Joined
Oct 16, 2023
Messages
18
I was trying all sorts of different record sizes. I found they had a large impact on results when testing inside VMs, and I was getting what I expected within the VMs on the pools they were allocated to.

When I ran it on the base TrueNAS SCALE install it gave me a much different result. I'm leaning towards it being a network issue with Starlink and my USW Pro, since all network traffic tops out near the same 22 MB/s.
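An iperf3 run between the server and a client would take the disks out of the equation entirely (assuming iperf3 is installed on both ends; the hostname below is just a placeholder):

iperf3 -s                  # on the TrueNAS box
iperf3 -c truenas.local    # on the client

If that also tops out near 22 MB/s, the bottleneck is the network path rather than the pools.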
 

loca5790

Dabbler
Joined
Oct 16, 2023
Messages
18
Thanks, sfatula!! You're right, I was completely wrong: I never tested 128k on TrueNAS itself, only 4k. I still have a network issue, but I can confirm it has nothing to do with TrueNAS or the disks.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Thanks, sfatula!! You're right, I was completely wrong: I never tested 128k on TrueNAS itself, only 4k. I still have a network issue, but I can confirm it has nothing to do with TrueNAS or the disks.

Your 4k speeds were better than mine! Curious what you ended up with at 128k? I got 464 MiB/s; I think yours might be faster.
 

loca5790

Dabbler
Joined
Oct 16, 2023
Messages
18
Your 4k speeds were better than mine! Curious what you ended up with at 128k? I got 464 MiB/s; I think yours might be faster.
[screenshot: fio results on the TrueNAS SCALE drive]

[screenshot: fio results on the NVMe drives in RAIDZ]

[screenshot: fio results on the SSDs in a 3× 2-way mirror]
 