Thanks for the info, in all honesty getting more bandwidth on the network is my main priority. I mainly store movies and video files on the NAS and stream to my devices at home, so no real high-IOPS situations.

So a couple of things here...
From an IOPS perspective, you have a fairly large delta between min and max, and generally low performance. While your average is fine, it appears the backing system or zpool layout can't consistently sustain that average; you can see that in the relatively large standard deviation in IOPS.
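For reference, the min/max/avg/stdev figures being discussed here come straight from fio's per-run summary lines. The exact job used in this thread wasn't quoted, so every parameter below is an assumption; a minimal sketch of a comparable random-read run:

```
# Illustrative 4k random-read test -- not necessarily the exact job run in this thread.
# /mnt/tank/fio-test is a placeholder dataset path; substitute your own.
# Use a --size larger than RAM so you measure the disks rather than the ZFS ARC.
fio --name=randread --directory=/mnt/tank/fio-test \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --size=16g --runtime=60 --time_based --group_reporting
```

The "iops : min=..., max=..., avg=..., stdev=..." line in its output is where the numbers above come from.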
I ran the same test on my admittedly high-end system, with an 8-way mirror of 10TB HDDs.
While my standard deviation in both bandwidth and IOPS is higher than yours, the performance is far greater.
For fun, here's what 2 mirrors of 960GB Optanes look like:
Which, funnily enough, looks slower than my HDDs above, but in reality, outside of this specific benchmark, they aren't! Which kinda proves my point here: does any of this really matter?
I provide this comparison to give you a baseline to measure against. I have no idea what your workload is or what you are trying to do. Which metric matters to you, IOPS or bandwidth?
From a disk performance perspective, doing super back-of-the-napkin math, your pool is delivering roughly a quarter of your maximum network bandwidth. That ratio is fine, especially if IOPS don't really matter. Considering you have a cheap Realtek NIC and a relatively slow back-end pool, the system is pretty much in homeostasis.
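For scale (illustrative numbers, not your exact results): 2.5GbE is 2.5 Gbit/s ÷ 8 ≈ 312 MB/s on the wire, call it ~280 MB/s after protocol overhead, and a quarter of that is roughly 70 MB/s out of the pool. Even a high-bitrate 4K remux streams at around 100 Mbit/s (~12.5 MB/s), so a pool in that range still has several streams' worth of headroom for your use case.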
So to confirm: would installing a well-supported (driver-wise) NIC likely give me better performance, or would a complete hardware upgrade, including a better NIC, be required given my average fio results?
From the replies here it would seem that my Realtek 2.5GbE USB adapter and/or the RTL8125B NIC in my TrueNAS SCALE machine are at fault (poor drivers), and my cheapest option for potentially better 2.5GbE speeds is to buy a new NIC for the TrueNAS SCALE box.
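Before buying anything, it may help to confirm where the bottleneck actually is. A quick sketch, with placeholder interface name and IP address: check which driver the RTL8125B is bound to, then run an iperf3 test so the network is measured with no disks in the path.

```
# 1. Confirm which kernel driver the onboard NIC is using (from the TrueNAS shell).
#    "eth0" is a placeholder -- substitute your actual interface name.
lspci -k | grep -iA3 ethernet
ethtool -i eth0

# 2. Test the network path by itself, with no disks involved.
#    On another machine on the LAN:  iperf3 -s
#    On the TrueNAS box (192.168.1.10 is a placeholder for the other machine):
iperf3 -c 192.168.1.10 -t 30
iperf3 -c 192.168.1.10 -t 30 -R   # same test in the reverse direction
```

If iperf3 already shows ~2.3 to 2.4 Gbit/s, the NIC and driver are not the bottleneck and a swap won't buy you much; if it's well below that, a better-supported NIC is the cheap first fix.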