Poor transfer speeds through NFS share

dias

Dabbler
Joined
Nov 3, 2022
Messages
24
Hi,
I am connected to my TrueNAS SCALE box over a 2.5G connection. My iperf3 results are around 2.37 Gbit/s, but when I try to transfer files (read or write) I only get around 100-140 MB/s. I have set the number of NFS servers to 4 (the Intel i5-6500 has 4 cores) and my mount options in /etc/fstab look like this:

192.168.10.95:/mnt/wd/wd /media/faruk/wd nfs rw,hard,intr,rsize=131072,wsize=131072,timeo=14 0 0
192.168.10.95:/mnt/seagate/Seagate /media/faruk/seagate nfs rw,hard,intr,rsize=131072,wsize=131072,timeo=14 0 0

I also set jumbo frames to 9000, but it didn't make a difference. Are there any other settings I should look at?
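For reference, the options the client actually negotiated can be checked with nfsstat (assuming a Linux client with the nfs-common tools installed):

# Show mounted NFS filesystems with their effective options, including vers=
nfsstat -m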

Thanks in advance.
 
Elliot
Joined
Dec 29, 2014
Messages
1,135
Your iperf results make it look like it isn't a network problem, so it becomes a question of pool construction and hardware. It would be helpful (and consistent with the forum rules) to include the full TrueNAS version as well as a description of the hardware. I am not particularly good with fio, but that is probably what you need to use to determine what kind of I/O you are getting off of your pool.
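As a rough starting point, a sequential test could look something like this (a sketch: run it on the TrueNAS host itself, the directory is a placeholder, and the size should exceed RAM so the ARC doesn't serve everything from cache):

# Hypothetical sequential read test against a scratch directory on the pool
fio --name=seqread --directory=/mnt/wd/fio-test --rw=read --bs=1M --size=16G --ioengine=psync

# Same idea for writes
fio --name=seqwrite --directory=/mnt/wd/fio-test --rw=write --bs=1M --size=16G --ioengine=psync

Delete the test files afterwards; fio leaves them behind.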
 

dias

Dabbler
Joined
Nov 3, 2022
Messages
24
Hi Elliot, thanks for the answer. I added my TrueNAS specs to my signature, and here are my two pools below:

[Screenshot: pool layout]


And sorry, I don't know what fio is.
 
Elliot
Joined
Dec 29, 2014
Messages
1,135
8G of RAM? That is well below the minimum supported, which is 16G if I recall correctly. I suspect that the number of disks is hurting you (more vdevs = more throughput), but it is primarily the lack of RAM that is hurting your reads. My main pool has 8 mirror vdevs (see signature) and 256G of RAM. That allows me to get 13 gigabit sustained reads (on a 40 gigabit storage network) with bursts above 16. Writing is 4-5 gigabit with an NVMe SLOG. Your first need is more RAM. Then look at more vdevs if your case can support it. That is where you will get better I/O.
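If you want to see how the load spreads across vdevs during a transfer, zpool iostat breaks the numbers down per vdev (a sketch; the pool name is a placeholder):

# Per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v wd 5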
 

dias

Dabbler
Joined
Nov 3, 2022
Messages
24
8G of RAM? That is well below the minimum supported, which is 16G if I recall correctly. I suspect that the number of disks is hurting you (more vdevs = more throughput), but it is primarily the lack of RAM that is hurting your reads. My main pool has 8 mirror vdevs (see signature) and 256G of RAM. That allows me to get 13 gigabit sustained reads (on a 40 gigabit storage network) with bursts above 16. Writing is 4-5 gigabit with an NVMe SLOG. Your first need is more RAM. Then look at more vdevs if your case can support it. That is where you will get better I/O.
As far as I know, the minimum requirement is 8 GB of RAM for simple things. I am not running any VMs, just qBittorrent, and I use the machine for storage purposes only. Also, I was getting much better performance when using Windows and an SMB share to connect to this NAS, so I thought it might be related to the NFS settings. My free RAM starts at 3 GB and drops to around 0.5 GB at most. Do you still think it is RAM related?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
As far as I know, the minimum requirement is 8 GB of RAM for simple things. I am not running any VMs, just qBittorrent, and I use the machine for storage purposes only. Also, I was getting much better performance when using Windows and an SMB share to connect to this NAS, so I thought it might be related to the NFS settings. My free RAM starts at 3 GB and drops to around 0.5 GB at most. Do you still think it is RAM related?
Quoting the Hardware Guide:
ZFS and, by extension, TrueNAS require a lot of RAM. The baseline for RAM sizing is 1GB per 1TB of storage. This rule is left deliberately vague.
The minimum requirement for TrueNAS is 8GB of RAM and lower quantities are not supported.
16GB is probably the sweet spot for most home users, but more RAM is generally an easy way of improving server performance.
Basically, except in a few niche cases, 16GB is the minimum RAM most users on this forum consider acceptable.
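Also, low "free" RAM on a ZFS system is expected: the ARC read cache deliberately uses most of the memory and gives it back on demand. You can see how much the ARC is holding from the TrueNAS shell (a sketch):

# Summarize ARC size, target, and hit rates
arc_summary | head -n 30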

Which disk model are you using? SMR drives can cause issues, but judging from your signature you should be good.

What is the nature of your data and the recordsize of your pools? Quoting the Introduction to ZFS:
ZFS uses the recordsize property to define the maximum size of blocks. It applies per dataset, so different datasets can have different values for this property. Since ZFS uses variable-sized blocks, this setting only affects the maximum size of blocks. Although a larger recordsize may seem appealing at first, it is not appropriate for all situations. Larger blocks mean that fewer blocks need to be written, in turn reducing the amount of metadata required. Larger blocks, however, are more prone to write amplification, reducing performance and taking up more space in snapshots. Therefore, this property should be tuned down when dealing with databases, block storage and similar workloads; and tuned up when dealing with large, immutable files, such as video.
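For what it's worth, recordsize can also be checked and changed from the shell (a sketch; the pool/dataset names are placeholders, and a new value only applies to data written afterwards):

# Show the current recordsize of a dataset
zfs get recordsize wd/wd

# Larger records suit big, immutable files such as video
zfs set recordsize=1M wd/wd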
 

dias

Dabbler
Joined
Nov 3, 2022
Messages
24
Quoting the Hardware Guide:

Basically, except in a few niche cases, 16GB is the minimum RAM most users on this forum consider acceptable.

Which disk model are you using? SMR drives can cause issues, but judging from your signature you should be good.

What is the nature of your data and the recordsize of your pools? Quoting the Introduction to ZFS:
Thanks Davvo, I guess I will upgrade my RAM then. I didn't think RAM might be the problem, as I was getting 170-220 MB/s while using an SMB share. The files are a mix, some small and some big, but I tried these speed tests with files from 300 MB to 4.5 GB. Where can I check the recordsize? I will have a look at it when I return home.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Thanks Davvo, I guess I will upgrade my RAM then. I didn't think RAM might be the problem, as I was getting 170-220 MB/s while using an SMB share. The files are a mix, some small and some big, but I tried these speed tests with files from 300 MB to 4.5 GB. Where can I check the recordsize? I will have a look at it when I return home.
Well, I can't help you more than this; NFS is not my expertise.
You can find it in the dataset properties, IIRC.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks Davvo, I guess I will upgrade my RAM then. I didn't think RAM might be the problem, as I was getting 170-220 MB/s while using an SMB share. The files are a mix, some small and some big, but I tried these speed tests with files from 300 MB to 4.5 GB.

Do you see any improvement in performance using soft NFS mounts? The intr option has also been deprecated/ignored for a long time (assuming a Linux client).
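For example, a revised fstab entry might look like this (a sketch based on your original lines; soft mounts can drop writes if the network hiccups, so test with non-critical data first, and note that timeo=600 is the usual default for NFS over TCP, while timeo=14 is a UDP-era value):

# intr removed (ignored on modern kernels); soft bounds retries together with retrans
192.168.10.95:/mnt/wd/wd /media/faruk/wd nfs rw,soft,rsize=131072,wsize=131072,timeo=600,retrans=2 0 0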

Where can I check the recordsize? I will have a look at it when I return home.

Unlikely to be the issue, but look under the Dataset Advanced Options:

[Screenshot: Dataset Advanced Options]


 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Basically, except in a few niche cases, 16GB is the minimum RAM most users on this forum consider acceptable.

The main problem with 8GB that *I* have observed, and I say this being the person who raised it to 8GB years ago, is that people take it to mean that they can run a VM and some jails and various taxing services, plus the guidance for 1GB per TB storage is theirs to ignore, and this is all supposed to work well in 8GB. Screw common sense.

I am not actually privy to why iXsystems raised the minimum memory requirement for Core to 16GB (see the download page). But I can imagine.

The bit that I don't understand is that the Scale minimum memory requirement is still 8GB, even with Linux's crappy memory management. This is the thing I would have expected to see at 16GB.

Either way, though, if you are expecting anything beyond bare-bones, low-performance, low-traffic service, you should consider 8GB too little.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
The bit that I don't understand is that the Scale minimum memory requirement is still 8GB, even with Linux's crappy memory management. This is the thing I would have expected to see at 16GB.
IMHO, there is a lack of uniformity in iX's documentation, even about CORE. Here and here, for example, it still says 8GB.
I agree that given SCALE's nature the minimum should be 16 GB.
 