Realtek RTL8125B 2.5GbE NIC Slow Throughput

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Don't know if I have the correct area or not, but has anyone had issues where their RTL8125B-based 2.5GbE NIC runs a lot slower than 2.5Gbps in their TrueNAS SCALE system? Below are my iperf results from my TrueNAS SCALE 22.12.2 box direct to my Windows 11 PC with a USB 3 to 2.5Gbps Ethernet adapter.

Code:
iperf3 -c 192.168.225.250
Connecting to host 192.168.225.250, port 5201
[  4] local 192.168.225.55 port 57106 connected to 192.168.225.250 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.02   sec   171 MBytes  1.42 Gbits/sec
[  4]   1.02-2.00   sec   170 MBytes  1.45 Gbits/sec
[  4]   2.00-3.00   sec   173 MBytes  1.45 Gbits/sec
[  4]   3.00-4.00   sec   179 MBytes  1.50 Gbits/sec
[  4]   4.00-5.00   sec   178 MBytes  1.50 Gbits/sec
[  4]   5.00-6.00   sec   179 MBytes  1.50 Gbits/sec
[  4]   6.00-7.00   sec   179 MBytes  1.50 Gbits/sec
[  4]   7.00-8.00   sec   172 MBytes  1.45 Gbits/sec
[  4]   8.00-9.00   sec   180 MBytes  1.51 Gbits/sec
[  4]   9.00-10.00  sec   174 MBytes  1.46 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.71 GBytes  1.47 Gbits/sec                  sender
[  4]   0.00-10.00  sec  1.71 GBytes  1.47 Gbits/sec                  receiver


iperf Done.


The above results are with the latest RTL8125 driver from Realtek (9.011.01), which I installed manually on the TrueNAS SCALE system. The performance was even worse with the default r8169 driver that TrueNAS SCALE uses for this NIC, which is why I tried the Realtek driver.
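(To confirm which driver is actually bound to the NIC, something like the following should work; the interface name enp3s0 is just a placeholder, yours will differ.)

Code:
# show the kernel module and version in use for the interface
ethtool -i enp3s0
# or list the driver per PCI device
lspci -k | grep -A 3 -i ethernet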

I am also using the latest Windows drivers from Realtek for my USB Ethernet adapter.

I am not sure if the old system I am using for TrueNAS SCALE simply can't handle the speed and is the cause of the issue, or if there is something else going on here. I am thinking of testing UNRAID to see if I have the same performance issues, but wanted to ask here if anyone has had similar issues.

My TrueNAS SCALE system has an Intel Core i7-870 CPU, 16 GB of DDR3 RAM, and 2x Seagate IronWolf 4 TB drives (ST4000VN006), mirrored.

When I was running the iperf tests, the CPU on the TrueNAS SCALE system didn't seem to be breaking a sweat (14% max utilization), so the only things I can think of are a driver and/or configuration issue, or that the PCIe and SATA subsystem on my old machine cannot handle 2.5GbE Ethernet.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Firstly, if you've read much in the forum, you should know that Realtek drivers are problematic no matter what. https://www.truenas.com/community/resources/is-my-realtek-ethernet-really-that-bad.196/

You may see different results with iperf if you multithread (add -P 8 to your command, for example; parallel streams make sense up to the core count of your CPU).
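For instance, something like this against the same server address as in your test above:

Code:
# 8 parallel streams; adjust -P up to your core count
iperf3 -c 192.168.225.250 -P 8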

Then, if you can get an idea of how fast your network can go, test your disks with fio to understand whether they can at least match the network.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
That's surprisingly good performance for Realtek NICs.
The USB adapter might be a factor as well, depending on the port type.
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Thanks for the replies. Just gave UNRAID a quick go and the iperf results were very similar, so it's nothing to do with TrueNAS SCALE.

I was tempted to upgrade the CPU, motherboard and RAM to an Intel 13th-gen platform and reinstall TrueNAS SCALE (as I am due for an upgrade anyway) on the off chance it's being caused by the old hardware. Probably not worth it when the actual issue is the Realtek drivers, as suggested.

Btw, the USB adapter is definitely operating at USB 3 speeds according to HWiNFO.

I will have to settle for "it is what it is" at the moment; at least the performance is a bit better than 1GbE :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I was tempted to upgrade the CPU, motherboard and RAM to an Intel 13th-gen platform and reinstall TrueNAS SCALE (as I am due for an upgrade anyway) on the off chance it's being caused by the old hardware. Probably not worth it when the actual issue is the Realtek drivers, as suggested.

My take? You REALLY do not want to do this. The 13th-gen schlock contains "E-cores", which could more charitably be named "half-arse cores", and these are not (yet) well supported by FreeBSD or Linux. If you want a newer platform, please consider an 11th gen. And while you're at it, if you really want 2.5GbE, Intel does make a card, but note that it has been problematic for some users. If you can, hold off for a while. Due to the need to redesign schedulers to accommodate the E-core/P-core crap, it is unclear just how soon solid and useful support for these newest CPUs will appear in FreeBSD and Linux.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I was tempted to upgrade the CPU, motherboard and RAM to an Intel 13th-gen platform and reinstall TrueNAS SCALE (as I am due for an upgrade anyway) on the off chance it's being caused by the old hardware. Probably not worth it when the actual issue is the Realtek drivers, as suggested.
…but it's a great way to stumble into other issues, as the hybrid architecture in 12th/13th gen is currently NOT supported by the scheduler.

If you upgrade the hardware, settle for something intermediate between your current 870 and the latest and shiniest bleeding edge.
For better-than-1GbE performance, Solarflare NICs can be found for $50 on eBay: 10GbE with good drivers in TrueNAS.
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Then, if you can get an idea of how fast your network can go, test your disks with fio to understand whether they can at least match the network.
Excuse my ignorance, but are you able to advise a fio command for me to run to confirm my HDDs can keep up? I haven't used the command before. I searched the forums and there are multiple variations of the command mentioned, so I wasn't sure which was best to run in my case. As mentioned, I have 2x Seagate IronWolf 4 TB HDDs mirrored in one vdev.
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
My take? You REALLY do not want to do this. The 13th-gen schlock contains "E-cores", which could more charitably be named "half-arse cores", and these are not (yet) well supported by FreeBSD or Linux. If you want a newer platform, please consider an 11th gen. And while you're at it, if you really want 2.5GbE, Intel does make a card, but note that it has been problematic for some users. If you can, hold off for a while. Due to the need to redesign schedulers to accommodate the E-core/P-core crap, it is unclear just how soon solid and useful support for these newest CPUs will appear in FreeBSD and Linux.
Cheers, I will steer clear of this.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Thanks. I should also mention my boot pool is on an SSD (256 GB in size, but I have partitioned it so TrueNAS only has a 16 GB boot pool on it, and the rest is available as a separate vdev/pool). I'm not sure if the results below were from my 2x 4 TB pool or from the SSD; they seemed fast, which would also explain the out-of-space message. Results were:

Code:
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=50g --io_size=1500g --blocksize=128k --iodepth=16 --direct=1 --numjobs=16 --runtime=120 --group_reporting
TEST: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=16
...
fio-3.25
Starting 16 processes
TEST: Laying out IO file (1 file / 51200MiB)
fio: ENOSPC on laying out file, stopping
fio: pid=0, err=28/file:filesetup.c:235, func=write, error=No space left on device
Jobs: 15 (f=15): [X(1),R(15)][5.8%][r=4044MiB/s][r=32.4k IOPS][eta 01m:54s]
Jobs: 15 (f=15): [X(1),R(15)][10.7%][r=4038MiB/s][r=32.3k IOPS][eta 01m:48s]
Jobs: 15 (f=15): [X(1),R(15)][14.9%][r=9104MiB/s][r=72.8k IOPS][eta 01m:43s]
Jobs: 15 (f=15): [X(1),R(15)][19.8%][r=9380MiB/s][r=75.0k IOPS][eta 01m:37s]
Jobs: 15 (f=15): [X(1),R(15)][24.8%][r=8861MiB/s][r=70.9k IOPS][eta 01m:31s]
Jobs: 15 (f=15): [X(1),R(15)][29.8%][r=7911MiB/s][r=63.3k IOPS][eta 01m:25s]
Jobs: 15 (f=15): [X(1),R(15)][33.9%][r=7686MiB/s][r=61.5k IOPS][eta 01m:20s]
Jobs: 15 (f=15): [X(1),R(15)][38.8%][r=7528MiB/s][r=60.2k IOPS][eta 01m:14s]
Jobs: 15 (f=15): [X(1),R(15)][43.0%][r=7763MiB/s][r=62.1k IOPS][eta 01m:09s]
Jobs: 15 (f=15): [X(1),R(15)][47.9%][r=6855MiB/s][r=54.8k IOPS][eta 01m:03s]
Jobs: 15 (f=15): [X(1),R(15)][52.1%][r=7446MiB/s][r=59.6k IOPS][eta 00m:58s]
Jobs: 15 (f=15): [X(1),R(15)][57.0%][r=7204MiB/s][r=57.6k IOPS][eta 00m:52s]
Jobs: 15 (f=15): [X(1),R(15)][62.0%][r=7101MiB/s][r=56.8k IOPS][eta 00m:46s]
Jobs: 15 (f=15): [X(1),R(15)][66.9%][r=6858MiB/s][r=54.9k IOPS][eta 00m:40s]
Jobs: 15 (f=15): [X(1),R(15)][71.1%][r=7126MiB/s][r=57.0k IOPS][eta 00m:35s]
Jobs: 15 (f=15): [X(1),R(15)][75.2%][r=6992MiB/s][r=55.9k IOPS][eta 00m:30s]
Jobs: 15 (f=15): [X(1),R(15)][80.2%][r=7203MiB/s][r=57.6k IOPS][eta 00m:24s]
Jobs: 15 (f=15): [X(1),R(15)][84.3%][r=6872MiB/s][r=54.0k IOPS][eta 00m:19s]
Jobs: 15 (f=15): [X(1),R(15)][88.4%][r=6256MiB/s][r=50.0k IOPS][eta 00m:14s]
Jobs: 15 (f=15): [X(1),R(15)][93.4%][r=5261MiB/s][r=42.1k IOPS][eta 00m:08s]
Jobs: 15 (f=15): [X(1),R(15)][98.3%][r=4547MiB/s][r=36.4k IOPS][eta 00m:02s]
Jobs: 15 (f=15): [X(1),R(15)][100.0%][r=4315MiB/s][r=34.5k IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=16): err=28 (file:filesetup.c:235, func=write, error=No space left on device): pid=0: Tue May 23 23:09:08 2023
  read: IOPS=54.0k, BW=6874MiB/s (7208MB/s)(806GiB/120003msec)
    clat (usec): min=13, max=116234, avg=267.86, stdev=1442.69
     lat (usec): min=13, max=116234, avg=268.35, stdev=1443.95
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[   74], 10.00th=[   88], 20.00th=[   99],
     | 30.00th=[  115], 40.00th=[  120], 50.00th=[  127], 60.00th=[  143],
     | 70.00th=[  147], 80.00th=[  153], 90.00th=[  212], 95.00th=[  449],
     | 99.00th=[ 1401], 99.50th=[10421], 99.90th=[23987], 99.95th=[27919],
     | 99.99th=[40109]
   bw (  MiB/s): min= 2763, max=13819, per=100.00%, avg=6887.29, stdev=142.63, samples=3585
   iops        : min=22108, max=110552, avg=55097.07, stdev=1141.02, samples=3585
  lat (usec)   : 20=0.02%, 50=1.03%, 100=20.84%, 250=69.65%, 500=4.74%
  lat (usec)   : 750=2.35%, 1000=0.23%
  lat (msec)   : 2=0.22%, 4=0.13%, 10=0.26%, 20=0.38%, 50=0.15%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=1.71%, sys=43.33%, ctx=690394, majf=20, minf=698
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=6599657,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16


Run status group 0 (all jobs):
   READ: bw=6874MiB/s (7208MB/s), 6874MiB/s-6874MiB/s (7208MB/s-7208MB/s), io=806GiB (865GB), run=120003-120003msec
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Seems to me like that's measuring a boot device that's probably an SSD... did you cd into the directory in the pool you wanted to test first?
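fio resolves a relative --filename against the directory you run it from, so the earlier run most likely landed on the boot SSD. A minimal fix (using the dataset path that appears in the next post) might be:

Code:
# change into the target dataset first...
cd /mnt/NASPool_Main/ROOT_DATA
# ...or pass an absolute path instead:
# --filename=/mnt/NASPool_Main/ROOT_DATA/fio-tempfile.dat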
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Seems to me like that's measuring a boot device that's probably an SSD... did you cd into the directory in the pool you wanted to test first?
Sorry, you were right. Below are the results.

Code:
fio --name TEST --eta-newline=5s --filename=/mnt/NASPool_Main/ROOT_DATA/fio-tempfile.dat --rw=read --size=50g --io_size=1500g --blocksize=128k --iodepth=16 --direct=1 --numjobs=16 --runtime=120 --group_reporting
TEST: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=16
...
fio-3.25
Starting 16 processes
TEST: Laying out IO file (1 file / 51200MiB)
Jobs: 16 (f=16): [R(16)][6.6%][r=1946MiB/s][r=15.6k IOPS][eta 01m:53s]
Jobs: 16 (f=16): [R(16)][11.6%][r=966MiB/s][r=7728 IOPS][eta 01m:47s] 
Jobs: 16 (f=16): [R(16)][16.5%][r=1153MiB/s][r=9226 IOPS][eta 01m:41s] 
Jobs: 16 (f=16): [R(16)][21.5%][r=1474MiB/s][r=11.8k IOPS][eta 01m:35s]
Jobs: 16 (f=16): [R(16)][26.4%][r=1449MiB/s][r=11.6k IOPS][eta 01m:29s]
Jobs: 16 (f=16): [R(16)][31.4%][r=1974MiB/s][r=15.8k IOPS][eta 01m:23s]
Jobs: 16 (f=16): [R(16)][36.4%][r=2196MiB/s][r=17.6k IOPS][eta 01m:17s]
Jobs: 16 (f=16): [R(16)][41.3%][r=1411MiB/s][r=11.3k IOPS][eta 01m:11s]
Jobs: 16 (f=16): [R(16)][46.3%][r=1423MiB/s][r=11.4k IOPS][eta 01m:05s]
Jobs: 16 (f=16): [R(16)][51.2%][r=1375MiB/s][r=11.0k IOPS][eta 00m:59s]
Jobs: 16 (f=16): [R(16)][56.2%][r=1152MiB/s][r=9216 IOPS][eta 00m:53s] 
Jobs: 16 (f=16): [R(16)][61.2%][r=1624MiB/s][r=12.0k IOPS][eta 00m:47s]
Jobs: 16 (f=16): [R(16)][66.1%][r=1328MiB/s][r=10.6k IOPS][eta 00m:41s]
Jobs: 16 (f=16): [R(16)][71.1%][r=1910MiB/s][r=15.3k IOPS][eta 00m:35s]
Jobs: 16 (f=16): [R(16)][76.0%][r=2310MiB/s][r=18.5k IOPS][eta 00m:29s]
Jobs: 16 (f=16): [R(16)][81.0%][r=1097MiB/s][r=8776 IOPS][eta 00m:23s] 
Jobs: 16 (f=16): [R(16)][86.0%][r=1768MiB/s][r=14.1k IOPS][eta 00m:17s]
Jobs: 16 (f=16): [R(16)][90.9%][r=2325MiB/s][r=18.6k IOPS][eta 00m:11s]
Jobs: 16 (f=16): [R(16)][95.9%][r=3326MiB/s][r=26.6k IOPS][eta 00m:05s]
Jobs: 16 (f=16): [R(16)][100.0%][r=4480MiB/s][r=35.8k IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=16): err= 0: pid=355647: Tue May 23 23:29:15 2023
  read: IOPS=13.9k, BW=1732MiB/s (1816MB/s)(203GiB/120003msec)
    clat (usec): min=13, max=320655, avg=1151.06, stdev=6909.25
     lat (usec): min=13, max=320655, avg=1151.40, stdev=6909.25
    clat percentiles (usec):
     |  1.00th=[    24],  5.00th=[    39], 10.00th=[    46], 20.00th=[    59],
     | 30.00th=[    64], 40.00th=[    68], 50.00th=[    72], 60.00th=[    79],
     | 70.00th=[   155], 80.00th=[   247], 90.00th=[   627], 95.00th=[  8586],
     | 99.00th=[ 22152], 99.50th=[ 36963], 99.90th=[101188], 99.95th=[137364],
     | 99.99th=[227541]
   bw (  MiB/s): min=  228, max= 4788, per=100.00%, avg=1732.95, stdev=51.57, samples=3840
   iops        : min= 1824, max=38304, avg=13861.47, stdev=412.59, samples=3840
  lat (usec)   : 20=0.52%, 50=12.51%, 100=53.40%, 250=13.90%, 500=7.84%
  lat (usec)   : 750=2.97%, 1000=1.09%
  lat (msec)   : 2=0.94%, 4=0.48%, 10=4.16%, 20=1.13%, 50=0.82%
  lat (msec)   : 100=0.14%, 250=0.09%, 500=0.01%
  cpu          : usr=0.51%, sys=6.19%, ctx=475013, majf=0, minf=737
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1662704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16


Run status group 0 (all jobs):
   READ: bw=1732MiB/s (1816MB/s), 1732MiB/s-1732MiB/s (1816MB/s-1816MB/s), io=203GiB (218GB), run=120003-120003msec
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You could set numjobs and iodepth to 1 to get something a bit more real-world for a single file copy.
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
Will give it a try. Now that I've made the rookie error of initially writing that 50 GB file to my 16 GB boot-pool, I am getting a critical warning that my boot-pool is full, which it is (100% utilization). I'm trying to work out how to mount it so I can delete the temp fio file, but I'm not having much luck after multiple forum/Google searches so far :-(
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Try restarting in your previous boot environment and see if you can get in.

Otherwise, if you have a recent enough config backup, just reinstall and restore that.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Apologies for my part in that... the post a few up in that same thread I sent you to has the critical instruction you needed, but it was a write test, so I sent you to the read test a few posts below it instead (which was missing that instruction).
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
All good mate, the reinstall and recovery from the backup config file only took me five-odd minutes and I'm back in business now. Below are the results from the fio command with the adjustments you mentioned. It would appear my disks and/or SATA controller are the cause of not getting the full 2.5GbE bandwidth on the NIC, as these speeds seem to correlate roughly with the SMB transfer rates from the NAS to my PC. Unless I am reading it completely wrong?

Code:
fio --name TEST --eta-newline=5s --filename=/mnt/NASPool_Main/ROOT_DATA/fio-tempfile.dat --rw=read --size=50g --io_size=1500g --blocksize=128k --iodepth=1 --direct=1 --numjobs=1 --runtime=120 --group_reporting
TEST: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=1
fio-3.25
Starting 1 process
TEST: Laying out IO file (1 file / 51200MiB)
Jobs: 1 (f=1): [R(1)][6.6%][r=98.0MiB/s][r=791 IOPS][eta 01m:53s]
Jobs: 1 (f=1): [R(1)][11.6%][r=173MiB/s][r=1385 IOPS][eta 01m:47s]
Jobs: 1 (f=1): [R(1)][16.5%][r=120MiB/s][r=962 IOPS][eta 01m:41s] 
Jobs: 1 (f=1): [R(1)][21.5%][r=116MiB/s][r=931 IOPS][eta 01m:35s] 
Jobs: 1 (f=1): [R(1)][26.4%][r=126MiB/s][r=1006 IOPS][eta 01m:29s]
Jobs: 1 (f=1): [R(1)][31.4%][r=85.0MiB/s][r=680 IOPS][eta 01m:23s]
Jobs: 1 (f=1): [R(1)][36.4%][r=157MiB/s][r=1252 IOPS][eta 01m:17s]
Jobs: 1 (f=1): [R(1)][41.3%][r=140MiB/s][r=1117 IOPS][eta 01m:11s]
Jobs: 1 (f=1): [R(1)][46.3%][r=110MiB/s][r=880 IOPS][eta 01m:05s]
Jobs: 1 (f=1): [R(1)][51.2%][r=135MiB/s][r=1081 IOPS][eta 00m:59s]
Jobs: 1 (f=1): [R(1)][56.2%][r=131MiB/s][r=1048 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [R(1)][61.2%][r=116MiB/s][r=931 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [R(1)][66.1%][r=137MiB/s][r=1097 IOPS][eta 00m:41s]
Jobs: 1 (f=1): [R(1)][71.1%][r=107MiB/s][r=856 IOPS][eta 00m:35s] 
Jobs: 1 (f=1): [R(1)][76.0%][r=132MiB/s][r=1058 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [R(1)][81.0%][r=159MiB/s][r=1275 IOPS][eta 00m:23s]
Jobs: 1 (f=1): [R(1)][86.0%][r=122MiB/s][r=974 IOPS][eta 00m:17s]
Jobs: 1 (f=1): [R(1)][90.9%][r=163MiB/s][r=1302 IOPS][eta 00m:11s]
Jobs: 1 (f=1): [R(1)][95.9%][r=130MiB/s][r=1043 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [R(1)][100.0%][r=191MiB/s][r=1526 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=148377: Wed May 24 02:44:40 2023
  read: IOPS=1089, BW=136MiB/s (143MB/s)(15.0GiB/120015msec)
    clat (usec): min=34, max=322488, avg=914.40, stdev=5603.28
     lat (usec): min=34, max=322489, avg=914.72, stdev=5603.29
    clat percentiles (usec):
     |  1.00th=[    43],  5.00th=[    56], 10.00th=[    63], 20.00th=[    74],
     | 30.00th=[    79], 40.00th=[    85], 50.00th=[    90], 60.00th=[    96],
     | 70.00th=[   119], 80.00th=[   127], 90.00th=[   149], 95.00th=[  5276],
     | 99.00th=[ 25297], 99.50th=[ 27395], 99.90th=[ 74974], 99.95th=[110625],
     | 99.99th=[185598]
   bw (  KiB/s): min=  256, max=296448, per=100.00%, avg=139731.01, stdev=43865.15, samples=239
   iops        : min=    2, max= 2316, avg=1091.65, stdev=342.70, samples=239
  lat (usec)   : 50=2.14%, 100=60.58%, 250=30.13%, 500=0.30%, 750=0.04%
  lat (usec)   : 1000=0.05%
  lat (msec)   : 2=0.09%, 4=0.21%, 10=4.75%, 20=0.29%, 50=1.27%
  lat (msec)   : 100=0.11%, 250=0.05%, 500=0.01%
  cpu          : usr=0.60%, sys=9.96%, ctx=9996, majf=0, minf=45
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=130801,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1


Run status group 0 (all jobs):
   READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=15.0GiB (17.1GB), run=120015-120015msec
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
It would appear my disks and/or SATA controller are the cause of not getting the full 2.5GbE bandwidth on the NIC, as these speeds seem to correlate roughly with the SMB transfer rates from the NAS to my PC. Unless I am reading it completely wrong?
If you don't get max or near-max speed in the iperf tests, there is a network issue; fio shows you what your pool is capable of doing in a set situation (determined by the parameters), basically how much of the iperf test speed you are able to use.
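For rough context, putting the numbers from this thread side by side:

Code:
link:   2.5 Gbit/s  / 8 ≈ 312 MB/s   (theoretical 2.5GbE wire speed)
iperf:  1.47 Gbit/s / 8 ≈ 184 MB/s   (measured network ceiling)
fio:    136 MiB/s       ≈ 143 MB/s   (single-job sequential read from the pool)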
 

shauno100

Dabbler
Joined
Oct 9, 2022
Messages
20
If you don't get max or near-max speed in the iperf tests, there is a network issue; fio shows you what your pool is capable of doing in a set situation (determined by the parameters), basically how much of the iperf test speed you are able to use.
So from my fio results above, wouldn't this be the bottleneck in my case and why I'm not getting better bandwidth results in iperf? Or are these results good enough that my iperf results should be better than they are, meaning a network issue as you mentioned?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
So from my fio results above, wouldn't this be the bottleneck in my case and why I'm not getting better bandwidth results in iperf? Or are these results good enough that my iperf results should be better than they are, meaning a network issue as you mentioned?
So a couple of things here...
bw ( MiB/s): min= 228, max= 4788, per=100.00%, avg=1732.95, stdev=51.57, samples=3840
iops : min= 1824, max=38304, avg=13861.47, stdev=412.59, samples=3840
From an IOPS perspective, you have a fairly high delta between the min and max, and generally low performance. While your average is fine, it does appear that the backing system or zpool (layout) is not quite able to keep up with the speeds posted as the average. You can see that in the relatively large standard deviation in IOPS.

I ran the same test on my, admittedly high end system, with 8-way mirror of 10TB HDDs.

bw ( MiB/s): min=33450, max=51771, per=2.18%, avg=42878.54, stdev=76.30, samples=3824
iops : min=267598, max=414172, avg=343027.55, stdev=610.41, samples=3824

While my standard deviation in both bw and iops is higher than yours, the performance is far greater.
For fun, here's what 2 mirrors of 960GB Optane look like:
bw ( MiB/s): min=24377, max=26172, per=-64.86%, avg=25445.75, stdev=12.83, samples=3824
iops : min=195022, max=209376, avg=203565.88, stdev=102.62, samples=3824
Which, funnily enough, looks slower than my HDDs above, but in reality, outside of this specific benchmark, they aren't! Which kinda proves my point here: does any of this really matter?

I provide this comparison as I'm trying to give you a benchmark to compare against. I have no idea what your workload is nor what you are trying to do. What metric matters to you, IOPS or bandwidth?

From a disk performance perspective, with super back-of-the-napkin math, you are getting about 1/4 of your pool's maximum performance worth of maximum network bandwidth. That ratio is fine, especially if IOPS don't really matter. Considering you have a cheap Realtek NIC and a relatively slow back-end pool, the system seems to be pretty much in homeostasis.
 