NFS Tuning for backups


nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
So I have been scouring the forums looking to maximize our investment in hardware, but I haven't found much useful info yet, so if this is a repeat of a previous thread, just point me in that direction and I will continue reading. We are using FreeNAS as a repository for our backups from Veeam, rsync, Acronis, etc., so I am not looking for the fastest I/O, just a good balance of disk write performance and capacity.

We have been using various FreeNAS boxes over the past several years for this purpose, but we have just acquired some new hardware from 45Drives. The specs are as follows:
Motherboard - X10DRL
CPU - Dual E5-2620 v4
RAM - 256GB ECC (I have seen so many threads referencing lack of memory, and we have run into that as well on other devices, so I wanted to give this unit plenty)
HBA - 2× HighPoint Rocket 750
NIC - 10Gb optical
Drives - 30× 8TB WD Purple 5400 RPM SATA 6Gb/s, 128MB cache

I created 3 volumes of 10 drives each with RAIDZ2 (rough CLI sketch below). In testing the first volume from one of our backup servers, we created a CIFS share and an NFS share. I would prefer to use NFS, but in initial tests from a Windows server with a 1Gb NIC the transfer rates were:
NFS - 45MB/s
CIFS - 110MB/s
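
For reference, each vdev is roughly equivalent to the following CLI (a sketch only; the pool name and da0-da9 device names are assumed, and the FreeNAS GUI does the same thing under the hood):

# One 10-disk RAIDZ2 vdev (pool and device names are placeholders)
zpool create tank1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9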

I have tested with iperf and get 930Mb/s, which is about what is expected on this network. Any ideas for better write performance?
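
The iperf check was along these lines (host name is a placeholder):

iperf -s              # on the FreeNAS box
iperf -c freenas01    # on the backup server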
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
Rocket HBAs are not known to be the "best" choice.
Are you using NFS in sync mode?
For testing, try setting ZFS sync=disabled.
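
Something along these lines (the dataset name is a placeholder):

zfs get sync tank1/backups              # check the current setting
zfs set sync=disabled tank1/backups     # testing only -- unsafe for real data
zfs set sync=standard tank1/backups     # restore the default afterwards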


Sent from iPhone using Tapatalk
 

nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
Disabling ZFS sync on the dataset yielded the following:
NFS - 75MB/s
CIFS - 110MB/s
 

nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
Also, running:
dd if=/dev/zero of=testfile bs=1024 count=50000 yields 174,393,273 bytes/sec (~1.4Gbps) Write
dd if=testfile of=/dev/zero bs=1024 count=50000 yields 74,880,736 bytes/sec (~0.6Gbps) Read
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
dd with bs=1024 is a very bad test -- it tests CPU overhead, not I/O speed. Increase the block size to 64-128KB or more.
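
For example, something like this (file name and count are illustrative; /dev/null is the conventional read sink):

# Write ~6.5GB in 128KB blocks
dd if=/dev/zero of=testfile bs=128k count=50000
# Read it back in 128KB blocks
dd if=testfile of=/dev/null bs=128k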
 

nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
Alright...
1K Write - 158,848,357 bytes/sec (~0.16 GB/s)
1K Read - 63,191,151 bytes/sec (~0.06 GB/s)
128K Write - 2,745,915,629 bytes/sec (~2.7 GB/s)
128K Read - 5,284,116,809 bytes/sec (~5.3 GB/s)
512K Write - 2,746,692,201 bytes/sec (~2.7 GB/s)
512K Read - 6,370,575,055 bytes/sec (~6.4 GB/s)
1M Write - 2,767,825,445 bytes/sec (~2.8 GB/s)
1M Read - 6,878,461,146 bytes/sec (~6.9 GB/s)
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
This is now too good to be true, probably because compression is enabled. A typical problem on this forum. :)
 

nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
Each volume was created with default options, so lz4 is enabled. This is the initial setup, so I can recreate the volumes. Should I do so and turn compression off?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
For real data, compression is good to have, but it makes your benchmark quite meaningless. You can disable compression for the duration of the benchmark and then re-enable it.
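
Something like this (dataset name assumed):

zfs set compression=off tank1/backups   # before the benchmark
# ... run the dd tests ...
zfs set compression=lz4 tank1/backups   # re-enable when done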
 

nriedman

Dabbler
Joined
Jul 19, 2012
Messages
20
Alright... with compression turned off on the volume:
1K Write - 158,777,889 bytes/sec (~0.16 GB/s)
1K Read - 411,820,567 bytes/sec (~0.41 GB/s)
128K Write - 1,912,261,713 bytes/sec (~1.9 GB/s)
128K Read - 6,210,407,323 bytes/sec (~6.2 GB/s)
512K Write - 1,045,145,421 bytes/sec (~1.0 GB/s)
512K Read - 5,492,036,171 bytes/sec (~5.5 GB/s)
1M Write - 917,473,419 bytes/sec (~0.92 GB/s)
1M Read - 5,968,735,490 bytes/sec (~6.0 GB/s)

all done via dd if=/dev/zero of=testfile bs=1M count=50000 and dd if=testfile of=/dev/zero bs=1M count=50000, with bs adjusted for each block size listed above.

I am just trying to get a sense of what NFS and CIFS throughput to expect given our hardware, direct read/write capability, and network specs. It seemed to me that 45MB/s was quite slow, and I am trying to track down whether it is a configuration problem or a hardware problem.
 