Abysmal NFS write performance

draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
Hi, I'm using NFS 4. I see really bad performance running FIO writes to the TrueNAS server via NFS. Any tips on what I can do to improve it?

Server:
Dual Intel Xeon E5-2640V3, 8 (16 HT) cores, 128GB RAM, 4x 10GbE
Storage: 12x 4TB enterprise SATA in RAIDZ2
OS: TrueNAS 12.0-U3

Client:
Dual Intel Xeon E5-2640V3, 8 (16 HT) cores, 128GB RAM, 4x 10GbE
Red Hat Enterprise Linux 8.3 x64

mount syntax:
sudo mount -t nfs IP:/mnt/test2/Storage2 /opt/NAS-SHARE2


fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=256k --numjobs=8 --iodepth=2 --runtime=60 --time_based --end_fsync=1

FIO results run directly on the server console:
Run status group 0 (all jobs):
WRITE: bw=9.81GiB/s (10.5GB/s), 1220MiB/s-1272MiB/s (1279MB/s-1334MB/s), io=601GiB (645GB), run=61212-61225msec

FIO results from the NFS client:
Run status group 0 (all jobs):
WRITE: bw=29.7MiB/s (31.2MB/s), 3788KiB/s-3835KiB/s (3879kB/s-3927kB/s), io=1789MiB (1876MB), run=60118-60173msec
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You could demonstrate the reason for this by setting sync=always on the dataset, running the test locally again, and seeing whether performance drops to the same level as the NFS client.

You could conversely try sync=disabled and see if the client speeds up.

(zfs set sync=always pool/dataset)
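
For reference, and assuming from the mount path above that the dataset is test2/Storage2 (substitute your own pool/dataset), the current value can be checked and later put back to the default with:

zfs get sync test2/Storage2
zfs set sync=standard test2/Storage2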

Since I mentioned it, it's really important to spell out the consequences of setting sync=disabled: if your NFS client is asking for sync writes, it expects that the written data is on disk, which will not necessarily be the case with sync=disabled. The data may be (at least partially) only in RAM, hence not safe, and could be lost in a power cut to the server. Take care if you use that setting and understand what you really need. https://jrs-s.net/2019/05/02/zfs-sync-async-zil-slog/

If changing that setting doesn't make any difference, look at the network. Have you enabled jumbo frames?
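
If you do try jumbo frames later, a quick sanity check from the RHEL client that 9000-byte frames actually pass end to end might look like this (eth0 and the server IP are placeholders; the MTU has to be raised on both hosts and on the switch):

ip link set dev eth0 mtu 9000
ping -M do -s 8972 <truenas-ip>   # 8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation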
 

draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
@sretalla I'm running without jumbo frames for now; I want to test other optimisations before turning on jumbo.
Does anyone know where in the UI or console I can check the current rsize and wsize for the NFS service? I tried tuning these on the client to change them.
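
For what it's worth, the values actually negotiated for an active mount can be read back on the Linux client rather than in the TrueNAS UI. A generic sketch (the rsize/wsize numbers below are just an example request; the server may cap whatever you ask for):

nfsstat -m                        # lists each NFS mount with its effective options, including rsize/wsize
grep nfs /proc/mounts             # same information straight from the kernel
sudo mount -t nfs -o rsize=1048576,wsize=1048576 IP:/mnt/test2/Storage2 /opt/NAS-SHARE2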
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
RAIDZ2 has to receive confirmation of data commit to all the parity stripes before returning a write complete, and NFSv4 by RFC is "O_SYNC" always, so you're effectively getting the write performance of a single device. Try breaking your pool up into two 6-drive vdevs, or three 4-drive vdevs. That way it can round-robin between them and get some parallelization.
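
A rough sketch of the two-vdev layout (pool name taken from the mount path above, disk names are placeholders, and recreating the pool destroys the existing data, so back it up first):

zpool create test2 raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11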
 