How does one improve NFS write performance?


thalf

Dabbler
Joined
Mar 1, 2014
Messages
19
I'm getting miserable write performance out of my newly built FreeNAS server when writing to it over NFS from a Linux (Debian) machine. I'm experiencing low overall speeds, and sometimes the catatonic states that jgreco mentions in https://bugs.freenas.org/issues/1531. My catatonic states sometimes last for about 20-30 seconds.

FreeNAS machine specs:
Motherboard: ASRock E3C226D2I
CPU: Intel Core i3-4130T (supports AES-NI)
Memory: Kingston 16GB DDR3 1333MHz ECC unbuffered (2x8) KVR1333D3E9SK2/16G
Hdd1 (ada0): Hitachi Deskstar 4TB HDS724040ALE640
Hdd2 (ada1): Western Digital Red 4TB WD40EFRX-68WT0N0

I first just installed FreeNAS on a USB stick, booted up, and created an encrypted, mirrored ZFS volume with lz4 compression, no atime, no dedup, and then a dataset within that volume (same settings regarding compression, encryption, atime and dedup). Then I exported that dataset over NFS.

Copying a 963GB directory structure from the Linux system (ext4) that "df -k" says is 1008909496k large took just over 24 hours! That comes in at just below 12MB/s, which is not cool. And the FreeNAS CPU usage never went above 20%, so to me it doesn't look like it's the CPU that's holding the system back.

I've since done a lot of tests, both dd'ing files and /dev/zero locally on the FreeNAS machine, and dd'ing files and /dev/zero and cp'ing files over NFS from the Linux machine to FreeNAS. Local FreeNAS performance is good regardless of compression/encryption, and read performance from FreeNAS to Linux is really good.

I've tested network performance with iperf, and I do get gig speeds between the Linux and FreeNAS machines in both directions.
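(For the record, a test like this is just the stock iperf commands, something like:

freenas# iperf -s
linux$ iperf -c 192.168.0.100
linux$ iperf -c 192.168.0.100 -r

where -r additionally runs the reverse direction.)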

I've dd'd 110GB (mpeg2 data) from Linux to FreeNAS over nc, writing it to the dataset, and achieved 102MB/s, so when taking NFS out of the picture things perform just fine. Reading that 110GB file back to Linux over NFS (dd'ing it to /dev/null) also works fine, achieving 97MB/s read performance over NFS.

But NFS write performance sucks... there are a few 90MB/s peaks, but sometimes it drops to below 1MB/s, and an average of 12MB/s is plain not acceptable or usable.

So does anyone have any tips on what settings are best for a Linux NFS client accessing a FreeNAS (9.2.1.1) NFS server? Should tuning be done on the Linux or FreeNAS side? What needs to be tuned?

I don't have any FreeBSD, Windows, or OSX machines to use as clients to find where the problem is or do additional testing, unfortunately.

I've searched both on these forums and using Google, but haven't found anything that works for me. I've looked at the Linux NFS-Howto, but its recommendations on "Optimizing NFS Performance" have yielded nothing. Adjusting wsize/rsize on the Linux machine has negligible impact.

Or should I just give up on NFS and use CIFS or SSHfs instead? (For some reason I can't get CIFS running under FreeNAS (it can't find winbind, smbd or nmbd), but I'm guessing I've just made some mistake somewhere, or need to install some additional software on the FreeNAS machine.)
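(If I do fall back to SSHfs, my understanding is that the Linux-side mount would be something like

linux$ sshfs root@192.168.0.100:/mnt/volume1/test /opt

assuming sshd is enabled on the FreeNAS box and root login is allowed, with the caveat that SSH encryption overhead tends to cap throughput well below wire speed.)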
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I've dd'd 110GB (mpeg2 data) from Linux to FreeNAS over nc, writing it to the dataset, and achieved 102MB/s, so when taking NFS out of the picture things perform just fine. Reading that 110GB file back to Linux over NFS (dd'ing it to /dev/null) also works fine, achieving 97MB/s read performance over NFS.

Actually, you took NFS out of the picture, along with network stacks, potential performance losses from the client side, and other issues. Now the question is: how do you determine which of the issues you excluded was the problem? ;) The scientific method at its best.


I will tell you that sending large numbers of small files always sucks in the performance arena. There's no getting around it and you just have to accept that. Also, if you were getting 90MB/sec peaks, then that tells me the server is more than capable of serving/receiving at those speeds and the issue isn't on the server side.

My first instinct based on reading these forums for 2 years is:

-Your source drive(s) are platter based media.
-Your files were large numbers of small files and not multi-TB files
-Your disks are heavily fragmented.

The bottom line is that if you aren't using an SSD, performance will tank if anything causes seeking of your hard drive. A drive that averages ~4ms per seek can manage only a couple hundred seeks per second, so if each seek yields just a few KB of data you end up with a sub-1MB/sec transfer rate. This is why SSDs make an old computer feel state of the art. If you aren't using an SSD, and you aren't using files that are huge, then your numbers can and will suck. I've got a 10Gb link to my server and I can still get <20MB/sec on small files despite being on an SSD.

So from what I've read everything is completely normal and I don't see anything being "broken" unless you're going to tell me your 1TB of data was a single file and it was on an SSD. ;)
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
What options are you using when mounting the NFS export on your Linux client? (just checking that you aren't forcing sync writes)
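(A quick way to check from the client side is to look at the options actually in effect, e.g.

linux$ grep nfs /proc/mounts

and see whether "sync" appears in the option list; if it doesn't, the client is using its default async write-behind behavior.)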
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I think eraser made a good point.. can you provide your fstab entry or mount command you are using?
 

thalf

Dabbler
Joined
Mar 1, 2014
Messages
19
First, thanks for the quick replies. I've been running a few more tests to give some more input, hence my delay in getting back to you guys.

I confess I'm no scientist. :) But I don't see how I took network stacks and potential performance losses from the client side out of the picture, as the nc test still reads from the client's disks and sends data over the network.

To be clear, this is what I ran on the FreeNAS machine for that test:
freenas# nc -l 9999 > /mnt/volume1/test/much_mpeg2_data.dd
freenas#

And this ran on the Linux client:
linux$ ( for n in 1 2 3 4 5 6 7 8 9 10 ; do dd if=/mnt/large_mpeg2_file bs=64k ; done ) | nc 192.168.0.100 9999
linux$

(I don't have the output any more, unfortunately.)

Since I only have 4GB RAM (and no swap) in the Linux machine, and /mnt/large_mpeg2_file is 11GB, it was not cached; it was read over and over again from disk. And dd'ing /mnt/large_mpeg2_file to /dev/null locally on the Linux machine gives me 103MB/s read speed.
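(For completeness: the standard way on Linux to make absolutely sure nothing is served from the page cache between runs is

linux# sync; echo 3 > /proc/sys/vm/drop_caches

though with an 11GB file and only 4GB of RAM it shouldn't matter here.)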

What options are you using when mounting the NFS export on your Linux client? (just checking that you aren't forcing sync writes)
The /etc/fstab entry on the Linux machine for the FreeNAS share is:
linux$ grep 192.168.0.100 /etc/fstab
192.168.0.100:/mnt/volume1/test /opt nfs noauto,vers=3,nolock,tcp,fg,intr,rsize=16384,wsize=16384,noatime,nodiratime 0 0
linux$

And the output of the "mount" command shows this:
linux$ mount | grep 192.168.0.100
192.168.0.100:/mnt/volume1/test on /opt type nfs (rw,noatime,nodiratime,vers=3,rsize=16384,wsize=16384,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.100,mountvers=3,mountport=633,mountproto=tcp,local_lock=all,addr=192.168.0.100)
linux$

My first instinct based on reading these forums for 2 years is:
-Your source drive(s) are platter based media.
-Your files were large numbers of small files and not multi-TB files
-Your disks are heavily fragmented.
Regarding cyberjock's instincts:
- Yes, the source media is spinning (2 mirrored 2TB disks using ext4).
- 597 files, 368 of those being larger than 1GB, and 140 being smaller than 100MB.
- Dunno about fragmentation, it's possible but check the read performance tests I do further down; it can't be that heavy.

The 963GB directory structure copy from Linux to FreeNAS that took ~24h was the previously mentioned 597 files.

So, to find out more, I simply did this on the Linux machine to test the read speed of the 597 files:

linux$ time ( for n in $(find /mnt -type f); do dd if=$n of=/dev/null bs=64k ; done ) 2>&1 | tee dd-outputs | grep copied
1481828220 bytes (1.5 GB) copied, 17.5272 s, 84.5 MB/s
157440224 bytes (157 MB) copied, 1.63952 s, 96.0 MB/s
1505986032 bytes (1.5 GB) copied, 18.6692 s, 80.7 MB/s
1477675112 bytes (1.5 GB) copied, 27.9347 s, 52.9 MB/s
7121713728 bytes (7.1 GB) copied, 106.994 s, 66.6 MB/s
3279250724 bytes (3.3 GB) copied, 40.3898 s, 81.2 MB/s
106132392 bytes (106 MB) copied, 1.14979 s, 92.3 MB/s
4407778252 bytes (4.4 GB) copied, 38.1292 s, 116 MB/s
2324604960 bytes (2.3 GB) copied, 25.4318 s, 91.4 MB/s
2702 bytes (2.7 kB) copied, 0.0234893 s, 115 kB/s
3657613696 bytes (3.7 GB) copied, 48.3833 s, 75.6 MB/s
2981480344 bytes (3.0 GB) copied, 36.903 s, 80.8 MB/s
2882005032 bytes (2.9 GB) copied, 37.5538 s, 76.7 MB/s
3652205312 bytes (3.7 GB) copied, 44.9742 s, 81.2 MB/s
1942237024 bytes (1.9 GB) copied, 19.5388 s, 99.4 MB/s
1286864302 bytes (1.3 GB) copied, 15.3013 s, 84.1 MB/s
2542 bytes (2.5 kB) copied, 0.0328737 s, 77.3 kB/s
1487872000 bytes (1.5 GB) copied, 15.8032 s, 94.1 MB/s
2456 bytes (2.5 kB) copied, 0.0157427 s, 156 kB/s
3370772884 bytes (3.4 GB) copied, 35.0967 s, 96.0 MB/s
[...]
3963480672 bytes (4.0 GB) copied, 53.9457 s, 73.5 MB/s
94176532 bytes (94 MB) copied, 1.0104 s, 93.2 MB/s
3774083832 bytes (3.8 GB) copied, 34.2281 s, 110 MB/s
2609 bytes (2.6 kB) copied, 0.0152736 s, 171 kB/s
real 212m40.143s
user 0m16.830s
sys 27m33.840s
linux$

It's not an exact measurement but I think it should give a pretty good idea of what read speeds to expect. Doing some math on the numbers gives the following:

Total bytes:
linux$ grep "copied" dd-outputs | cut -d' ' -f1 | awk '{s+=$1}END{print s}'
1033120488977
linux$

Time (seconds):
linux$ grep "copied" dd-outputs | cut -d' ' -f6 | awk '{s+=$1}END{print s}'
12756.6
linux$

Average speed (MB/s):
linux$ echo "$(grep "copied" dd-outputs | cut -d' ' -f1 | awk '{s+=$1}END{print s}') / $(grep "copied" dd-outputs | cut -d' ' -f6 | awk '{s+=$1}END{print s}') / (1024*1024)"| bc -l
77.23535080589116327069
linux$

Alternative calculation of average speed (MB/s):
linux$ echo $( (grep "copied" dd-outputs | grep "MB/s" | cut -d' ' -f8 | awk '{s+=$1}END{print s}' ; grep "copied" dd-outputs | grep "kB/s" | cut -d' ' -f8 | awk '{s+=$1/1024}END{print s}') | awk '{s+=$1}END{print s}' ) / $(grep "copied" dd-outputs | wc -l) | bc -l
65.67051926298157453936
linux$

And the alternative calculation without all the small files (MB/s):
linux$ echo $(grep "copied" dd-outputs | grep "MB/s" | cut -d' ' -f8 | awk '{s+=$1}END{print s}') / $(grep "copied" dd-outputs | grep "MB/s" | wc -l) | bc -l
80.63374485596707818930
linux$

(Again, I'm no scientist ;) but I think the difference makes sense: the first calculation weights every byte equally, while the alternative one is a plain average of per-file rates, so the many small, slow files drag the average down. It's been 15+ years since I last studied math or statistics...)

Highs and lows (MB/s), excluding the smaller files:
linux$ egrep "^[0-9]{6}" dd-outputs | grep "MB/s" | sort -t' ' -k +8nr
1523720192 bytes (1.5 GB) copied, 12.9614 s, 118 MB/s
3580509632 bytes (3.6 GB) copied, 30.8666 s, 116 MB/s
4407778252 bytes (4.4 GB) copied, 38.1292 s, 116 MB/s
1508525160 bytes (1.5 GB) copied, 13.1075 s, 115 MB/s
2328094720 bytes (2.3 GB) copied, 20.2837 s, 115 MB/s
2912370416 bytes (2.9 GB) copied, 25.3089 s, 115 MB/s
3007993232 bytes (3.0 GB) copied, 26.1149 s, 115 MB/s
342934560 bytes (343 MB) copied, 2.97549 s, 115 MB/s
3701077604 bytes (3.7 GB) copied, 32.2196 s, 115 MB/s
3970880352 bytes (4.0 GB) copied, 34.6313 s, 115 MB/s
[...]
1582606336 bytes (1.6 GB) copied, 31.4373 s, 50.3 MB/s
2404657152 bytes (2.4 GB) copied, 47.9762 s, 50.1 MB/s
407907360 bytes (408 MB) copied, 8.23959 s, 49.5 MB/s
232886880 bytes (233 MB) copied, 4.73359 s, 49.2 MB/s
437116920 bytes (437 MB) copied, 9.00741 s, 48.5 MB/s
44623680 bytes (45 MB) copied, 0.931074 s, 47.9 MB/s
1081526272 bytes (1.1 GB) copied, 24.5866 s, 44.0 MB/s
77420468 bytes (77 MB) copied, 1.76331 s, 43.9 MB/s
204685752 bytes (205 MB) copied, 4.72558 s, 43.3 MB/s
62033044 bytes (62 MB) copied, 1.50899 s, 41.1 MB/s
linux$

And the final big test I did today, which just finished:

On the FreeNAS machine:
freenas# nc -l 9999 | dd bs=64k | tar xf-
119645+575795898 records in
119645+575795898 records out
1033120931840 bytes transferred in 13002.103024 secs (79457987 bytes/sec)
freenas#

On the Linux client:
linux$ tar cf- /mnt | dd bs=64k | nc 192.168.0.100 9999
7442169+35595591 records in
7442169+35595591 records out
1033120931840 bytes (1.0 TB) copied, 13001.8 s, 79.5 MB/s
linux$

So I can push the files from Linux to FreeNAS at 79.5MB/s when using tar over nc.

The Linux machine is certainly capable of reading the source files at more than 12MB/s (65-80MB/s in the for-loop with find and dd above), and it can read and push files over the network to the FreeNAS machine at way more than 12MB/s (79.5MB/s in the final tar-over-nc test above). It's when NFS gets involved that the performance tanks to 12MB/s.

Suggestions on how to tune/optimize NFS would be greatly appreciated...
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
I like all of your test results!

Let us concentrate on getting good write performance for large files over NFS first. Pick a larger test file of incompressible data (a movie file > 250 MB would be ideal).
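(If no suitable movie file is handy, /dev/urandom makes a fine source of incompressible data; the size and path here are just examples.)

linux$ dd if=/dev/urandom of=/tmp/testfile bs=1M count=623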

Some observations/questions:

I believe your CPU supports Hyper-Threading (HT). Please try disabling HT in your BIOS and test again before making any other changes.

Are you using jumbo ethernet frames anywhere?

What version of Linux are you running on your client?

What brand of network card do you have in your FreeNAS server and in your Linux workstation? (We have seen problems with Realtek cards.)

I see that your NFS client rsize and wsize options are set to 16k. Can you modify those values to be a bit larger and test again (maybe start with "rsize=32768,wsize=32768") ?
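For example, your existing fstab line with just those two values changed (untested on my end, everything else left as you have it):

192.168.0.100:/mnt/volume1/test /opt nfs noauto,vers=3,nolock,tcp,fg,intr,rsize=32768,wsize=32768,noatime,nodiratime 0 0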

Can you confirm your ZFS sync settings are set to 'standard'? Run the following from your FreeNAS shell: zfs list -o name,sync
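For example (dataset names are the ones from this thread; "standard" on every line is what you want to see):

freenas# zfs list -o name,sync
NAME          SYNC
volume1       standard
volume1/test  standard

And strictly as a temporary diagnostic, never a permanent setting, you can rule sync writes in or out by disabling them on the test dataset, re-running the copy, and then restoring the default:

freenas# zfs set sync=disabled volume1/test
freenas# zfs inherit sync volume1/test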

Also, for fun, can you attach the output of running "arc_summary.py"?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok.. I've got a 10Gb link to my FreeNAS server, and my client machine is running Linux too... so what's the mount command you use? I don't feel like playing games with my fstab entries. ;)

Edit: Ok, did some googling. I did the following:

Created a RAMdrive...


# mkdir /tmp/ramdisk
# chmod 777 /tmp/ramdisk
# mount -t tmpfs -o size=10240M tmpfs /tmp/ramdisk/

Also I mounted the NFS share with the command:

mount -t nfs -o defaults 192.168.2.10:/mnt/tank /mnt/tank_NFS

And I was able to move over 600MB/sec from the NFS share to my RAMdrive. That's about what I was hoping/expecting with NFS. CIFS is usually 250-350MB/sec or so. Being that CIFS is single-threaded, it hits saturation on 1 core and it fluctuates based on CPU usage from second to second.

Next I did your exact fstab entry but without noauto.

Got just 200MB/sec. So first I think you should consider trying the default NFS settings just to see what kind of performance you get. I realize there's a big difference between 200MB/sec and 12MB/sec, but you gotta start somewhere.

I was confused on what nc did.. so I went back and looked. You are right that the nc tests do more than I thought. I thought that was a local test, but it wasn't. Not sure it was as useful as other tests, but that's not really something I'm concerned about at the moment. Personally I like to see iperf and then use NFS as you intend to use it. But to each their own. In either case, it's obvious something is not quite right.

So here's what I'd try...

1. Do iperf testing from FreeNAS to your linux box. Anything less than about 850Mb/sec is not a good 1Gb link.
2. Try mounting the NFS share with the default settings to rule out your changes.
3. If the speed is significantly different, try adding your options back one at a time to see which one(s) are hurting performance.
4. If you can use a RAMdrive as a source when moving files to the server, that will at least rule out your hard drives as a potential problem.
5. Do you know how to determine what the default NFS settings are? (See the example below.) Surely the defaults on my linux box (Linux Mint 16) aren't quite the same as yours and I'd like to see the differences. If you post your defaults I can post mine for comparison.
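(For #5: after mounting, something like

linux$ nfsstat -m

or grepping the share out of /proc/mounts will print the full effective option string, defaults included.)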

One other note.. 12MB/sec is almost always the sign of a 100Mb connection. I'm wondering if there was a 100Mb connection due to a loose cable or something and now it's Gb again but you haven't actually tested it. /shrug.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
cyberjock, I believe that thalf was able to get really good performance reading from FreeNAS, but saw poor performance when writing to it over NFS. Were you able to test writing to your FreeNAS server over NFS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I did a write test to my server with NFS and I got a "paltry" 290MB/sec. I will say that it copied 2GB in about 2 seconds, then was very slow for a few seconds, causing the average to drop significantly (presumably ZFS was breathing), then it finished the last 3GB in about 3 seconds. Total file size was about 9GB.

The only options I used were: nolock,noatime,nodiratime,rw,hard,intr
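(That is, the same share as the earlier read test, so the full command was presumably along the lines of the following.)

mount -t nfs -o nolock,noatime,nodiratime,rw,hard,intr 192.168.2.10:/mnt/tank /mnt/tank_NFS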
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Thanks for the datapoint cyberjock!

Yesterday I did some testing of NFS performance between FreeNAS 9.2.1.2 and Ubuntu 13.04 using a 623 MB file. The Ubuntu system was a virtual machine with 1 vCPU. Network connectivity was 1 Gb.

  • NFS writes from Ubuntu -> FreeNAS were CPU-bound (80-95% CPU utilization on Ubuntu) and I got around 30 MBytes/sec.
  • NFS reads from FreeNAS -> Ubuntu were faster; I measured around 70 MBytes/sec.
I plan to build a physical Ubuntu server today and repeat my tests to rule out any virtualization overhead.

Edit: Turns out the above results are completely skewed and should be ignored. VMware Workstation 10 defaults to using a poor choice of virtual NIC for Ubuntu guests. Manually changing the vNIC to an emulated e1000 or e1000e really increased the above numbers.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
Ok, new interesting results today!

FreeNAS's NFS server appears to have a max rsize/wsize value of 65536. The Linux NFS client can support up to 1 MB, but if you try to mount the FreeNAS NFS share using higher values they will be silently reduced to 65536 (as shown by the output of "nfsstat -m").
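For example, you can ask for 1 MB and then check what you were actually given (mount target borrowed from earlier in this thread):

linux$ mount -t nfs -o rsize=1048576,wsize=1048576 192.168.0.100:/mnt/volume1/test /opt
linux$ nfsstat -m

The Flags line in the nfsstat output reports the negotiated values, which is where the 65536 shows up.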

I installed Ubuntu 13.10 32-bit on a physical system. It has an older PCI-based Intel NIC (PRO/1000 GT).

iperf testing against my FreeNAS server using different TCP window sizes (-w option) gave the following results:

Window = 256K
  • Ubuntu -> FreeNAS = 872 Mbits/sec (109.0 MB/sec)
  • FreeNAS -> Ubuntu = 857 Mbits/sec (107.1 MB/sec)
Window = 64K (same as the NFS server max)
  • Ubuntu -> FreeNAS = 668 Mbits/sec (83.5 MB/sec)
  • FreeNAS -> Ubuntu = 636 Mbits/sec (79.5 MB/sec)
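(These were stock iperf runs with the window size set via -w on both ends, e.g. "iperf -s -w 256K" on the server and "iperf -c <freenas-ip> -w 256K" on the client.)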
I then did some NFS file copies between clients and got the following results:

623 MB File (time cp source dest)​
  • Ubuntu -> FreeNAS = 7.9 sec (78.9 MB/s), 7.6 sec (81.9 MB/s), 7.4 sec (84.2 MB/s)
  • FreeNAS -> Ubuntu = 7.2 sec (86.5 MB/s), 7.2 sec (86.5 MB/s), 7.2 sec (86.5 MB/s)
1433.6 MB File​
  • FreeNAS -> Ubuntu = 15.1 sec (94.9 MB/s), 16.0 sec (89.6 MB/s), 15.5 sec (92.5 MB/s)
  • Ubuntu -> FreeNAS = 17.6 sec (81.5 MB/s), 17.5 sec (81.9 MB/s), 16.4 sec (87.4 MB/s)

I believe my above results show that FreeNAS's NFS performance is on par with iperf results (using a 64K window size).

========

Side Note: I tried to repeat the above test results using FreeBSD 10.0 32-bit on my physical system. The network performance was not nearly as good (iperf max transmit speeds were 1/2 of what I saw under Linux).

After some research I believe the reason is that the FreeBSD 'em' network driver does not enable TSO for older PCI Intel NICs. PCI-Express Intel NICs do have TSO enabled by default.

Linux appears to enable TSO even on older PCI-based Intel NICs.
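(To check this on your own hardware, with interface names as examples only: on Linux, "ethtool -k eth0" shows a tcp-segmentation-offload on/off line; on FreeBSD, "ifconfig em0" lists TSO4 under options when it is enabled, and it can be toggled with "ifconfig em0 tso" / "ifconfig em0 -tso".)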
 

mosquitou

Dabbler
Joined
Mar 4, 2014
Messages
11
thalf, I'm very interested to know how you resolved your problem. I'm having quite a similar one, and I'm totally lost... Thanks a lot for any help.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Mosquitou:

You have your own thread with your own problem. Do not show up in a second thread and hijack it.
 

KTrain

Dabbler
Joined
Dec 29, 2013
Messages
36
Very interesting thread guys, good dialogue!
 