NFS/SMB/iSCSI performance issues

jadechessink

Dabbler
Joined
Jul 7, 2015
Messages
26
I recently switched from OMV to TrueNAS CORE and I've been dealing with some horrible performance issues ever since, and I can't figure out why.

Server details:
Version:
TrueNAS-13.0-U5.3
5x mirror vdevs for a total of 10 SSDs
Supermicro X10SRW-F
Single Intel Xeon E5-1650
48 GB RAM
Intel I350 dual-port 1GbE - LAGG - management
Intel X520 dual-port 10GbE - LAGG - storage
Jumbo frames enabled

Write speed for the pool averages ~600mbps, tested with fio:
Code:
fio --name=fiotest --filename=/mnt/data/fiotest --size=16G --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=posixaio --iodepth=8 --group_reporting --runtime=60


iperf over the 10Gb interface gets about 7gbps.

The same end client used for the iperf testing gets about 70mbps no matter whether I test over iSCSI, NFS, or SMB. When traffic is generated I can confirm via the dashboard that it is on the 10Gb interface and not the 1Gb; the same was confirmed on the client with iftop, which showed the traffic leaving its 10Gb interface.
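
The client-side run was along these lines (the mount point, file name, and ioengine here are illustrative, assuming a Linux client with the share mounted at /mnt/nfs, not the exact paths used):
Code:
fio --name=sharetest --filename=/mnt/nfs/fiotest --size=4G --rw=write --bs=1M --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60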

Code:
 WRITE: bw=39.6MiB/s (41.6MB/s), 39.6MiB/s-39.6MiB/s (41.6MB/s-41.6MB/s), io=2386MiB (2502MB), run=60180-60180msec


I've tried looking through the TrueNAS logs but haven't been able to find anything relevant.

Any assistance would be appreciated.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
A few things to consider:
How many threads did you use for iperf?
iperf3 is single-threaded by default, which might be a bottleneck, although 70mbps is well below a single iperf3 stream's throughput.

Can you give some info on your pool?
How full is it? Performance will degrade at higher utilization.
Did you enable atime or sync writes?
How is SMB configured?
Did you test with a single 10Gb port instead of the LAGG?
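
To rule out a single-stream bottleneck and to check the pool settings above, something along these lines should do (the server address, pool, and dataset names are placeholders):
Code:
# run several parallel iperf3 streams (start iperf3 -s on the server first)
iperf3 -c 192.168.1.10 -P 4

# pool capacity and fragmentation
zpool list tank

# atime and sync settings on the dataset backing the share
zfs get atime,sync tank/data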
 

jadechessink

Dabbler
Joined
Jul 7, 2015
Messages
26
Apologies, I should have specified: iperf was a single-thread test and was getting 7gbps. The 70mbps figure came from testing iSCSI/NFS/CIFS with fio.

The pool is roughly 9 TB and currently has about 12 GB on it.
I tried testing performance with sync set to standard, always, and disabled on the pool, but there was no difference.

CIFS, NFS, and iSCSI are all configured pretty plainly, with the bare minimum needed to work.

If I remove a port from the LAGG I get the same results. Thanks for taking a look at this.

CIFS:
[screenshots of the SMB service and share settings]

NFS:
[screenshots of the NFS service and share settings]

LAGG:
[screenshot of the LAGG configuration]
 

jadechessink

Dabbler
Joined
Jul 7, 2015
Messages
26
After some tinkering I got better performance out of TrueNAS CORE in all areas.

I updated and rebooted TrueNAS CORE, which brought iSCSI and SMB up to the ~700mbps area I was expecting.

NFS, however, was still anywhere between 40-70mbps. I had previously tested with sync set to standard/always/disabled with no change in results, but after the update, setting sync to disabled on the pool gave a considerable performance increase, up to the expected ~700mbps mark. That is unfortunate, as both iSCSI and SMB did not need sync disabled to reach those speeds.

I'm starting to believe there's some kind of conflict over sync between ZFS and NFS. Any suggestions on what I can do to bring NFS up to the ~700 area with sync set to always?
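
For reference, this is roughly how I was checking and toggling it; setting sync per dataset rather than on the whole pool limits the scope (names here are examples):
Code:
# check the current setting
zfs get sync tank/data

# disable sync writes on just the dataset behind the NFS share
zfs set sync=disabled tank/data

# revert to honoring the client's sync requests
zfs set sync=standard tank/data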
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
If you have a spare SSD, try setting up a LOG (SLOG) device. That should help with sync write performance.
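
Something like this, assuming a pool named tank and two spare NVMe devices (device names are placeholders; on CORE NVMe drives typically show up as nvd0/nvd1, so adjust to your hardware):
Code:
# add a mirrored SLOG to the pool
zpool add tank log mirror nvd0 nvd1

# verify it appears under "logs"
zpool status tank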
 

jadechessink

Dabbler
Joined
Jul 7, 2015
Messages
26
Not for an SSD-only pool like the OP has.
Btw, what's the make and model of your SSDs?
The SSDs installed are enterprise-grade Micron SATA SSDs; not the fastest in terms of performance, but they last a long time.

@asap2go
I set up an NVMe mirror log and it didn't make a difference; it actually made things a smidge worse, by about 50mbps.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
What do you mean by "mbps"?
 
Joined
Feb 22, 2017
Messages
29
setting sync to disabled on the pool gave a considerable performance increase, up to the expected ~700mbps mark. That is unfortunate, as both iSCSI and SMB did not need sync disabled to reach those speeds.
NFS used to use UDP, so sync was kind of mandatory. Now it uses TCP, so sync isn't really required.

You might check this thread: https://www.truenas.com/community/threads/why-is-smb-async-and-nfs-sync.107227/

Long story short... different share protocols use different methods of syncing data. You might want to check whether your iSCSI/SMB shares are *forcing* sync writes, because by default they may not be. Just a thought.
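
One way to see whether a given share is actually issuing sync writes is to watch ZIL activity on the server while the test runs (e.g. with zilstat, if available on your build), and to compare against a client-side fio run that forces a sync after every write (paths here are illustrative):
Code:
# on TrueNAS, watch synchronous write activity during the client test
zilstat 1

# on the client, force sync behavior for comparison
fio --name=synctest --filename=/mnt/nfs/fiotest --size=1G --rw=write --bs=1M --fsync=1 --ioengine=libaio --iodepth=1 --numjobs=1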
 