50% throughput

Andy Alias

Cadet
Joined
Mar 27, 2024
Messages
4
I have run some tests to verify what speeds the setup is capable of.

Pool Speed Test - TrueNAS internal fio pool test: 1145 MB/s (~9600 Mbit/s)
Network Speed Test - Debian Server to TrueNAS, iperf3: 1090 MB/s (~9340 Mbit/s)
Transfer Speed Test - Debian Server to TrueNAS, fio over an SMB (cifs) mount: 550 MB/s (~4400 Mbit/s)
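
For reference, the network figure came from a plain iperf3 client/server run, along these lines (the IP is a placeholder and the exact flags may have differed):

Code:
# on TrueNAS (server side)
iperf3 -s

# on the Debian VM (client side): 30 s run, 4 parallel streams
iperf3 -c <truenas-ip> -t 30 -P 4
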

The Debian server is a VM on a separate machine running Proxmox, which is connected to a 10Gb switch.
TrueNAS SCALE runs on another machine connected to the same 10Gb switch.

The TrueNAS specs:
i5-9400, 16GB 2666 DDR4 ECC RAM
4x Samsung 4TB QVO SSDs
1x Integral 128GB NVMe boot drive

What can I check or do to get closer to 10Gb/s? Would adding an NVMe cache drive help? Or more RAM? Or am I missing something?

Thanks in advance if anyone has any pointers.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's not likely that the QVOs can sustain their nominal performance for long. And that's not a high bar to begin with, you'd need two vdevs (which I guess you do have, unless you're using RAIDZ) to just barely clear the bar for 10 Gb/s. Since you didn't specify how you ran the benchmark, it's hard to comment beyond that. Hell, the bottleneck could be the client.
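
Rough numbers, for context (back-of-envelope, assuming the QVOs' spec-sheet rating of roughly 530 MB/s sequential):

Code:
# 10 Gb/s line rate, before TCP/SMB overhead:
echo $(( 10000 / 8 ))   # 1250 MB/s
# A mirror vdev writes at roughly single-drive speed, so two striped
# mirror vdevs of nominal ~530 MB/s drives just barely get in range:
echo $(( 2 * 530 ))     # 1060 MB/s
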
Andy Alias said:
Would adding a NVMe Cache drive help?
The simple answer is "no".
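If you want to see why, look at your ARC stats before considering L2ARC; if the hit rate is already high, a cache device has little to work with. On SCALE, something like:

Code:
# summarizes ARC size and hit rate on TrueNAS SCALE
arc_summary | head -40
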

Andy Alias said:
or More RAM?
Not likely.
 

Andy Alias

Cadet
Joined
Mar 27, 2024
Messages
4
Ericloewe said:
Since you didn't specify how you ran the benchmark, it's hard to comment beyond that.
Hi, thanks for the response. I'm currently away so I can't get the full details; however, the internal fio tests ran for 60 seconds with a 10-second ramp-up, which works out to roughly a 60GB transfer. The exact same settings were used for the VM-to-NAS fio test.
 

Andy Alias

Cadet
Joined
Mar 27, 2024
Messages
4
OK, so here are the details. The pool is RAIDZ1.
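
(The layout can be confirmed with the command below; DataPool is the pool name from my mount paths.)

Code:
# shows the single raidz1 vdev containing the four QVOs
zpool status DataPool
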

Pool test

Code:
# Mixed 128 KiB random read/write: 12 jobs x queue depth 32, a 256 MiB file
# per job (~3 GiB total working set), 60 s timed run after a 10 s ramp.
fio --bs=128k --direct=1 --directory=/mnt/DataPool/Media/fio --gtod_reduce=1 \
    --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw \
    --numjobs=12 --ramp_time=10 --runtime=60 --rw=randrw --size=256M --time_based


Result
Code:
randrw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=32
...
fio-3.33
Starting 12 processes
Jobs: 12 (f=12): [m(12)][100.0%][r=1082MiB/s,w=1098MiB/s][r=8653,w=8783 IOPS][eta 00m:00s]       
randrw: (groupid=0, jobs=12): err= 0: pid=20421: Thu Mar 28 05:28:17 2024
  read: IOPS=8747, BW=1094MiB/s (1147MB/s)(64.1GiB/60043msec)
   bw (  MiB/s): min=  864, max= 1386, per=100.00%, avg=1095.12, stdev= 8.16, samples=1428
   iops        : min= 6913, max=11092, avg=8760.28, stdev=65.25, samples=1428
  write: IOPS=8744, BW=1093MiB/s (1147MB/s)(64.1GiB/60043msec); 0 zone resets
   bw (  MiB/s): min=  896, max= 1284, per=100.00%, avg=1094.39, stdev= 6.14, samples=1428
   iops        : min= 7171, max=10274, avg=8754.42, stdev=49.10, samples=1428
  cpu          : usr=0.81%, sys=0.15%, ctx=339218, majf=0, minf=437
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=19.3%, 16=56.1%, 32=24.3%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=97.1%, 8=0.3%, 16=0.3%, 32=2.4%, 64=0.0%, >=64=0.0%
     issued rwts: total=525254,525018,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1094MiB/s (1147MB/s), 1094MiB/s-1094MiB/s (1147MB/s-1147MB/s), io=64.1GiB (68.9GB), run=60043-60043msec
  WRITE: bw=1093MiB/s (1147MB/s), 1093MiB/s-1093MiB/s (1147MB/s-1147MB/s), io=64.1GiB (68.8GB), run=60043-60043msec




Transfer Test - From Debian VM to TrueNAS

Code:
# Same workload as the internal pool test, but pointed at the SMB (cifs)
# mount of the TrueNAS share; 30 s timed run after a 10 s ramp.
fio --bs=128k --direct=1 --directory=/mnt/Media/fio --gtod_reduce=1 \
    --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw \
    --numjobs=12 --ramp_time=10 --runtime=30 --rw=randrw --size=256M --time_based


Result
Code:
randrw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=32
...
fio-3.33
Starting 12 processes
Jobs: 12 (f=12): [m(12)][100.0%][r=600MiB/s,w=583MiB/s][r=4799,w=4662 IOPS][eta 00m:00s]      
randrw: (groupid=0, jobs=12): err= 0: pid=3706: Thu Mar 28 12:48:45 2024
  read: IOPS=4494, BW=563MiB/s (590MB/s)(16.6GiB/30123msec)
   bw (  KiB/s): min=384222, max=747774, per=100.00%, avg=576981.02, stdev=6294.54, samples=719
   iops        : min= 3000, max= 5841, avg=4506.83, stdev=49.19, samples=719
  write: IOPS=4484, BW=561MiB/s (588MB/s)(16.5GiB/30123msec); 0 zone resets
   bw (  KiB/s): min=428069, max=671744, per=100.00%, avg=575643.64, stdev=4613.86, samples=719
   iops        : min= 3343, max= 5248, avg=4496.37, stdev=36.06, samples=719
  cpu          : usr=0.22%, sys=0.15%, ctx=150790, majf=0, minf=437
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=25.0%, 16=50.0%, 32=25.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=97.5%, 8=0.0%, 16=0.0%, 32=2.5%, 64=0.0%, >=64=0.0%
     issued rwts: total=135400,135072,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=563MiB/s (590MB/s), 563MiB/s-563MiB/s (590MB/s-590MB/s), io=16.6GiB (17.8GB), run=30123-30123msec
  WRITE: bw=561MiB/s (588MB/s), 561MiB/s-561MiB/s (588MB/s-588MB/s), io=16.5GiB (17.7GB), run=30123-30123msec
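
One thing I still need to check is which SMB dialect the cifs mount negotiated, since an older dialect (or no multichannel) can cap throughput on a 10Gb link; something like:

Code:
# on the Debian VM: look for vers= in the mount options
mount | grep cifs
# dialect and multichannel state of active cifs sessions
cat /proc/fs/cifs/DebugData
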
 

Andy Alias

Cadet
Joined
Mar 27, 2024
Messages
4
I did some more testing, checking the VM. It had been running with 1 core and 2GB RAM. I upped the cores to two, which gave a bit more performance (see below), then to four, which did not improve things further, then upped the RAM to 4GB, which also didn't improve anything. (The CLI equivalents are sketched after this paragraph.)
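
For reference, the resource changes amount to the Proxmox CLI commands below (100 is a placeholder VM ID):

Code:
qm set 100 --cores 2       # then --cores 4 for the later test
qm set 100 --memory 4096   # RAM from 2GB to 4GB
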

Code:
...
fio-3.33
Starting 12 processes
Jobs: 12 (f=12): [m(12)][100.0%][r=590MiB/s,w=591MiB/s][r=4722,w=4729 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=12): err= 0: pid=2167: Fri Mar 29 00:02:06 2024
  read: IOPS=5222, BW=654MiB/s (685MB/s)(19.2GiB/30081msec)
   bw (  KiB/s): min=557312, max=777347, per=100.00%, avg=669891.82, stdev=3983.22, samples=720
   iops        : min= 4354, max= 6073, avg=5233.18, stdev=31.12, samples=720
  write: IOPS=5212, BW=652MiB/s (684MB/s)(19.2GiB/30081msec); 0 zone resets
   bw (  KiB/s): min=557568, max=778880, per=100.00%, avg=668632.85, stdev=3911.07, samples=720
   iops        : min= 4356, max= 6085, avg=5223.35, stdev=30.56, samples=720
  cpu          : usr=0.33%, sys=0.13%, ctx=90291, majf=0, minf=441
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=21.1%, 16=53.9%, 32=25.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=97.4%, 8=0.1%, 16=0.1%, 32=2.5%, 64=0.0%, >=64=0.0%
     issued rwts: total=157091,156794,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=654MiB/s (685MB/s), 654MiB/s-654MiB/s (685MB/s-685MB/s), io=19.2GiB (20.6GB), run=30081-30081msec
  WRITE: bw=652MiB/s (684MB/s), 652MiB/s-652MiB/s (684MB/s-684MB/s), io=19.2GiB (20.6GB), run=30081-30081msec
 