NFSd: (max) wsize and rsize settings


dreamworks

Cadet
Joined
Nov 27, 2016
Messages
2
Hi guys,

SPEC:
Site 1: FreeNAS-9.2.1.7-RELEASE-x64
Site 2: FreeNAS-9.10.1-U4

I use the same Linux client (CentOS) connecting to both FreeNAS boxes using the following fstab entries:

10.19.10.2:/mnt/vStoreEnc /data/backup nfs vers=3,async,rsize=524288,wsize=524288,timeo=14,noatime,nodiratime,intr
10.1.10.2:/mnt/vStoreEnc /data/backup.rz1 nfs vers=3,async,rsize=524288,wsize=524288,timeo=14,noatime,nodiratime,intr

As you can see, I have rsize and wsize configured quite large.

When I now check /proc/self/mountstats on my Linux client, I get the following output:

device FreeNAS-9.2.1.7-RELEASE-x64:/mnt/vStoreEnc mounted on /data/backup with fstype nfs statvers=1.1
opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,timeo=14,retrans=2,sec=sys,mountaddr=10.19.10.2,mountvers=3,mountport=669,mountproto=udp,local_lock=none

device FreeNAS-9.10.1-U4:/mnt/vStoreEnc mounted on /data/backup.rz1 with fstype nfs statvers=1.1
opts: rw,vers=3,rsize=131072,wsize=131072,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,timeo=14,retrans=2,sec=sys,mountaddr=10.1.10.2,mountvers=3,mountport=965,mountproto=udp,local_lock=none
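
For reference, a quick way to pull just the negotiated sizes per mount on the client is something like:

# Print each NFS mount followed by the opts: line carrying the negotiated rsize/wsize
grep -E 'mounted on|rsize=' /proc/self/mountstats

# nfsstat -m shows the same effective mount options per mount
nfsstat -m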

So the rsize and wsize settings differ between the two FreeNAS versions. As I understand it, this seems to be an nfsd setting, but I am neither a FreeBSD nor a FreeNAS pro.
I wonder where and how I can set and increase them? I assume it's a tunable, but which one? Where would I find more documentation on it?
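
If it is a tunable, I would guess it shows up among the NFS-related sysctls on the FreeNAS box, something along these lines (run on the server, not the client):

# List NFS server knobs on the FreeNAS box (vfs.nfsd subtree)
sysctl -a | grep -i nfsd
sysctl vfs.nfsd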

Any hints would be appreciated...
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Those 64K and 128K are hardcoded upper limits in the FreeBSD NFS server. The value was increased from 64K to 128K about 1.5 years ago to better match the default ZFS record size.
 

dreamworks

Cadet
Joined
Nov 27, 2016
Messages
2
Hi Mav,

many thanks for the reply. From checking the source I already suspected that those values are hard-coded.
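(For reference, if I am looking at the right file, the caps appear to be compile-time constants in sys/fs/nfs/nfsproto.h; something like the following should find them in a source tree, although the exact constant names may differ between branches:)

# Assuming a FreeBSD source tree under /usr/src; constant names may differ per branch
grep -nE 'NFS_(SRVMAXIO|MAXDATA)' /usr/src/sys/fs/nfs/nfsproto.h
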
Anyhow, from what I can observe, the difference between 64K and 128K TRIPLES the NFS performance (on GBit Ethernet).

From my general understanding those values primarily affect network transfer speeds over NFS and are not necessarily ZFS related...

Or, to be more exact, here is an example doing a find over a bigger directory structure: ~250,000 files, ~23 GB

(1) Directly on the FreeNAS
time find . | wc -l
248356

real 0m0.797s
user 0m0.236s
sys 0m0.667s

(2) Using NFS with 128 KB wsize,rsize
time find . | wc -l
248356
real 0m21.784s
user 0m0.525s
sys 0m3.180s

(3) Using NFS with 64 KB wsize,rsize

time find . | wc -l
248356

real 1m10.216s
user 0m0.573s
sys 0m4.799s


So we have ONE second directly on the NAS, 21 seconds with 128 KB, and 1 MINUTE and 10 seconds with 64 KB (no jumbo frames, standard MTU 1500).
I have to say I am somewhere between impressed, surprised and irritated by those differences.

From what I read/understand, those "low" values date back to the old 10 Mbit days and act like a handbrake on FreeNAS.

Not sure what can be done here, as this would require the FreeNAS team to also build/use/provide a non-standard/tuned kernel...
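
In case anyone wants to repeat the comparison on their own client: rsize/wsize are negotiated at mount time, so the 64K case against the 9.10 box can be forced with a fresh mount (paths and IP below are the ones from my fstab above):

# Force 64K transfer sizes against the FreeNAS 9.10 box for comparison
umount /data/backup.rz1
mount -t nfs -o vers=3,rsize=65536,wsize=65536,timeo=14,noatime,nodiratime,intr 10.1.10.2:/mnt/vStoreEnc /data/backup.rz1

# Back to the fstab entry (the server caps the requested 512K at its 128K maximum)
umount /data/backup.rz1
mount /data/backup.rz1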
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I am surprised. The motivation behind the increase was to avoid a read-modify-write pattern in ZFS during large writes. I am not sure why it affected directory listing that much. On the other hand, it may be some other unrelated change between the FreeBSD releases underlying FreeNAS 9.2.1 (FreeBSD 9.x) and 9.10 (FreeBSD 10.3). Or did you test this on the same FreeNAS system, just changing the mount options on the client?
 