Slow write performance to an NFS share

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
See post #6 for an updated status.

Hello,

I have a VM running on my FreeNAS box. The purpose of the VM is to run a burp backup server. Inside the VM, I mounted a dataset from the same NAS over an SMB share, and the burp server stores its backups on this mount. When a client is backing up, I get very low write performance, which results in very long backup times.

I ran two backups from the same client: one storing on my SMB share, the other storing directly on my VM volume.

The first backup took almost 3 hours; the second took only about 25 minutes.

My share is mounted with fstab:
//x.x.x.x/BACKUP /var/local/backups cifs credentials=/etc/cifs_auth_backup,sec=ntlmssp,uid=admin,gid=access_backup,iocharset=utf8

When I run this dd command inside my VM against the mount directory, write performance seems fine:
# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.84061 s, 121 MB/s
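
(As an aside: conv=fdatasync only flushes once at the very end, so this test mostly measures streaming throughput. A variant that forces a flush on every block, which is closer to how an NFS client commits writes, would look something like this:)
# force a flush after every 64k block to expose per-write sync cost
dd if=/dev/zero of=test-sync bs=64k count=2k oflag=dsync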

Any idea how to improve my backup performance?

Thanks.
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
It's also (very) slow when using NFS. The speed is comparable to what I get with SMB.

I guess I could attach a zvol to my VM, but then I would have to deal with a fixed-size volume. I would like to avoid that...
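
(For reference, attaching a zvol would look something like this; the name and size are illustrative. The -s flag creates a sparse volume that only consumes pool space as data is written, though the guest still sees a fixed-size disk.)
# create a sparse 200 GB zvol to attach to the VM as a virtual disk
zfs create -s -V 200G pool0/burp-disk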
 

dlavigne

Guest
Were you able to resolve this or figure out the reason for the slowness?
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
No, I still haven't found the source of my problem.

I verified that the problem is not related to the network: iperf between the VM and the FreeNAS host gives results over 1 Gbps.

My timings are somewhat inconsistent. I got 3 hours once but haven't reproduced it since. I tried different mount parameters with both NFS and CIFS. SMB was still slow but gave better results, at about 50 minutes; however, I later found that I can't use CIFS because it rejects filenames containing colons. So I have to use NFS.

My best time with NFS is almost 2 hours, with these mount options:
nas.xxxx.xx:/mnt/pool0/BACKUP/Burp /var/local/backups/Burp nfs rsize=32768,wsize=32768
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
The problem is not related to bhyve either. I started a new VM on my Proxmox host, mounted the NFS share on it, and installed the burp server. The backup job is not over yet, but it has already been running for over an hour. I expect results similar to my bhyve VM.

As you can see on the following graph (network usage on the VM being backed up), the speed seems to decrease over time (the backup starts at 10:35). Any idea why?
[Graph: network throughput of the VM being backed up, decreasing over time from 10:35]
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
Well, the problem is not burp either... it seems the problem is really with the NFS client/server (I think I've eliminated all other possible causes).

For each backup, burp creates one directory on the share. The directory contains all files present on the machine being backed up (gzipped). The directory hierarchy is preserved.

File stats of the archive:
Files: 402199
Meta data: 3
Directories: 99013
Hard links: 9
Soft links: 10838
Special files: 33
Grand total: 512095
Size: 5 GB

So, let's take burp out of the equation by copying the same data with another tool (rsync).

Testing NFS read performance:
On the NFS client, let's copy the backup archive to the local drive (in fact, since the NFS client runs in a bhyve VM, the write goes to a local ZFS zvol).
rsync -av /mnt/nfs/clientbackup/ /tmp/clientbackup/

Completion time: about 5 minutes. Seems good!

Testing NFS write performance:
Still on the NFS client, let's copy the local backup archive back to the NFS share.
rsync -av /tmp/clientbackup/ /mnt/nfs/clientbackup.test/

Completion time: about 110 minutes. This is really slow! The time is similar to what I get with a burp backup session.

What could cause that? Why is read performance good while write performance is so bad? If I run the same rsync task directly on my FreeNAS box, performance is good (less than 5 minutes), so the problem is not the hardware/disks or the server being too busy. I've also already verified that network performance between the FreeNAS box and the VM is good. I don't understand what's wrong...
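
(For anyone who wants to reproduce this pattern without burp, here is a minimal sketch, assuming the NFS share is mounted at /mnt/nfs; all paths are illustrative.)
# generate many small files, then compare local vs. NFS copy times
mkdir -p /tmp/smallfiles
for i in $(seq 1 10000); do head -c 4096 /dev/urandom > /tmp/smallfiles/f$i; done
time rsync -a /tmp/smallfiles/ /tmp/smallfiles.copy/      # local baseline
time rsync -a /tmp/smallfiles/ /mnt/nfs/smallfiles.test/  # slow if every file write is synced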
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
disable sync writes on the NFS share and see if the problem magically goes away

zfs set sync=disabled pool/share

where pool/share is your dataset name. (not mount point)

If the problem does go away... then we should take it from there... if not... well.. i dunno.

restore defaults with zfs set sync=standard pool/share
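
(A minimal check-and-restore sequence, assuming the dataset name from the mount line earlier in the thread, pool0/BACKUP/Burp:)
# check the current setting first
zfs get sync pool0/BACKUP/Burp
# disable sync writes, re-run the backup or rsync test, then restore the default
zfs set sync=disabled pool0/BACKUP/Burp
zfs set sync=standard pool0/BACKUP/Burp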
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
disable sync writes on the NFS share and see if the problem magically goes away

zfs set sync=disabled pool/share

where pool/share is your dataset name. (not mount point)

If the problem does go away... then we should take it from there... if not... well.. i dunno.

restore defaults with zfs set sync=standard pool/share

Hi,

The write rsync took only 6 minutes this time!

But I understand that disabling sync writes may result in data loss.

Why do sync writes slow down NFS but not local storage?

Thanks.
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
Well, knowing that sync writes are the source of my problem, I changed my Google search keywords. It seems a SLOG on an SSD could help increase NFS performance (most posts were about storing VM disks on NFS).

The problem I have with this is that I don't have M.2/SATA ports available on my server, so I would also need to add a PCIe SATA card. Also, none of this seems to be required when not using NFS.

My use case is not storing VMs, which is what posts about slow NFS performance always discuss. I wonder if there is another solution that would suit my use case of storing backups on the NFS share?
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
Have you tried just rsyncing over SSH instead of using shares?
Hello,

The rsync was just to test that the problem was not with my backup solution. The real use case is a burp server.
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
So the only solution is to disable ZFS sync on my dataset?

The way I understand it, NFS is slow (compared to doing the same task directly on the ZFS file system) because NFS always issues sync writes, even when the underlying application doesn't request them. Is there no way to force the NFS client/server to sync only when requested?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Reading through the thread it looks like you already have your solution:
- Disable sync on the dataset
Or
- Use a SLOG. Maybe even just SATA, though you are more likely to go for a PCIe-attached one. These don't have to be large at all; there are some threads about people using fairly inexpensive NVMe solutions.

The point of sync writes is to protect against the storage becoming unavailable during a write. You are writing backup files, and you're happy to store them on SMB (as am I, by the way), so the obvious solution here is to disable sync on the dataset used for backup.
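
(For the SLOG route, attaching a dedicated log device to the pool is a one-line operation. The pool and device names below are assumptions; on FreeNAS the device would typically be a gptid or an nvd/ada device node.)
# attach a dedicated log device (SLOG) to the pool, then verify it appears under "logs"
zpool add pool0 log /dev/nvd0
zpool status pool0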
 

f4242

Explorer
Joined
Mar 16, 2017
Messages
97
My solution was to replace the VM with a BSD jail and use "local storage" on the NAS instead of a network share.
 
Joined
Jun 25, 2019
Messages
6
Reading through the thread it looks like you already have your solution:
- Disable sync on the dataset
Or
- Use a SLOG. Maybe even just SATA, though you are more likely to go for a PCIe-attached one. These don't have to be large at all; there are some threads about people using fairly inexpensive NVMe solutions.

The point of sync writes is to protect against the storage becoming unavailable during a write. You are writing backup files, and you're happy to store them on SMB (as am I, by the way), so the obvious solution here is to disable sync on the dataset used for backup.
Thank you so much for your speedy reply. I am a total noob when it comes to FreeNAS. I somehow managed to set up a Dell PowerEdge with FreeNAS and it's running fine, but I want to optimize its performance as much as possible. I run a small film audio post-production facility with no IT support. Your videos and reading material helped me immensely in getting set up so far. I had never heard of a SLOG before, but after reading your post I came across the following article:

https://www.ixsystems.com/blog/zfs-zil-and-slog-demystified/

I understand a little bit now; please correct me if I am wrong. Does this mean that if I install an SSD and set up a SLOG, I would get all-flash storage performance? If so, I already have FreeNAS running on mirrored SSDs. Can I use those for the SLOG? (The Dell PowerEdge R510 I am using has 14 drive bays: 12 for NAS storage and 2 for running the OS. I have installed 2 SSDs in a mirrored RAID for the OS; 6 bays have 4 TB SAS drives, and the other 6 are empty for now, to be used when needed in the future.) Given this scenario, is it possible to use my existing SSDs for the SLOG? If so, can you please point me in the right direction, perhaps to an article or a video?

Also, is there paid remote tech support available? For a small company like mine, paid tech support could be of great use.

Thanks again for all your replies. Truly appreciate it.
 

douglasg14b

Dabbler
Joined
Nov 26, 2017
Messages
26
So... I have a SLOG device, a 400 GB Intel DC S3710, and I get very slow writes with NFS. If I disable sync, the writes are quite fast again.

Having a SLOG device didn't seem to help a whole lot with NFS. Is there some other solution besides disabling sync?

Below are screenshots of some perf tests. The iSCSI ones are meant as a baseline, indicating that this is not a networking or disk issue. This is a pool of 5x SATA3 SSDs in RAIDZ1.

I'd expect to get ~300 MB/s with NFS, rather than 189 MB/s.

[Screenshots: iSCSI baseline and NFS performance test results]
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912

alecz

Dabbler
Joined
Apr 2, 2021
Messages
18
Meanwhile, writes are pretty fast over NFS to a ZFS-on-Linux box :frown: (because the Linux NFS server supports a 1 MiB read/write size, while the FreeBSD NFS server is stuck at 128 KiB).
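
(For the Linux-to-Linux case, the larger transfer size is requested via mount options; the server name and paths below are placeholders.)
# request 1 MiB read/write transfers from a Linux NFS server
mount -t nfs -o rsize=1048576,wsize=1048576 linuxnas:/export/backup /mnt/backup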
 