Should I be happy with this performance?

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I've done quite a bit of reading and set up a FreeNAS server with some parts I had lying around and some I purchased. A quick rundown:

Fractal Design Node 804 case
AMD Athlon 5350 2.05 GHz quad-core on an Asus AM1M-A (like I said, some parts I had from 2016)
8 GB RAM
Avago 9207-8i (LSI LSI00301) with IT firmware (purchased for this project) and miniSAS cabling
5 x 6 TB = 3 x 6 TB Seagate IronWolf (new) + 2 x 6 TB WD Red (EFRX60s stolen from my old Synology 2-bay), set up as a RAIDZ1
Latest version of FreeNAS on USB stick
No L2ARC, no SLOG, no ZIL drive

I used all default options in FreeNAS with SMB enabled. This is used as a backup system, but I try not to buy junk when I put stuff together in case I want to reuse parts.

The system basically has a write performance of about 80 MB/s - I see each drive doing ~20 MB/s via GUI reporting. Read performance is fine: copying several 4 GB files to my local NVMe saturates my 1 Gb/s network - reads are ~20 MB/s per drive, which lines up well with about 100 MB/s total (no parity to deal with). Sync is set to "standard" - I tried setting it to "disabled", and performance is the same.

Should I expect more than 80 MB/s write performance? I was hoping for > 110 MB/s (near saturation of the network).
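For what it's worth, the local test I have in mind to take the network out of the picture entirely is something like this (just a sketch - POOLNAME stands in for my pool, and compression has to be off or the zeros compress away to nothing):

Code:
# turn compression off on the pool root so test data isn't compressed away
zfs set compression=off POOLNAME
# local sequential write, bypassing SMB and the network (4 GiB)
dd if=/dev/zero of=/mnt/POOLNAME/nas/ddtest.bin bs=1m count=4096
# read it back -- note ARC will serve freshly written data from RAM
dd if=/mnt/POOLNAME/nas/ddtest.bin of=/dev/null bs=1m
# restore compression when done
zfs set compression=lz4 POOLNAME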
 

Pitfrr

Wizard
Joined
Feb 10, 2014
Messages
1,531
Indeed, you would expect to come closer to saturating a gigabit network. But...
You could see it from another perspective: that's not bad at all, given that the LAN chipset on the motherboard is a Realtek! :smile:
FreeNAS and Realtek LAN chipsets just don't work well together.
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I've seen the Realtek thing mentioned many times, and it's quite odd to me. Has it been PROVEN that installing an Intel NIC will all of a sudden increase write performance? I ran iperf tests and easily saturate the network, though I understand that kind of test doesn't rule out a driver problem.

Will this card do the trick?


Does anyone have a recommended PCI-E NIC?
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
So I did kind of an interesting thing. When I made my SMB share, I didn't make a dataset - I just put it under /mnt/POOLNAME/nas and called it "nas". I can't tell what compression is doing (the main pool just shows 1.0), so I thought I'd create a dataset called "nas-share" and then MOVE the files at the command line. I thought this operation would be VERY quick, but since a dataset is a separate filesystem, mv falls back to a copy-and-delete and treats all the data as new.

I will say, I could see compression was way up, BUT my data transfer rate was only marginally better than over the network (~20 MB/s read, ~24 MB/s write per disk). I understand it has to do a read AND a write, but this certainly eliminates the network. If I had let it go (I stopped it), it would have run for MANY, MANY hours.
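To actually see what compression is doing per dataset (instead of squinting at the pool-level 1.0), something like this works:

Code:
# per-dataset compression ratio plus logical vs. on-disk usage
zfs get compressratio,used,logicalused POOLNAME/nas-share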

If I copy a lot of data to a fast USB stick, attach it to the NAS, mount it, and then copy the data to the pool, I assume that will show the max speed of the array outside of the network?
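Roughly what I have in mind (a sketch - the device name is an example, and I'm assuming the stick is FAT-formatted):

Code:
# find the stick's device node
camcontrol devlist
mkdir -p /mnt/usb
mount -t msdosfs -o ro /dev/da1s1 /mnt/usb
# timed copy into the pool, no network involved
date; cp /mnt/usb/bigfile.bin /mnt/POOLNAME/nas/; date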
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
Just a running story for you all, if you're interested. I cooked my nas-share-that-was-not-in-a-dataset and destroyed the entire pool. This time I put a single disk in a stripe (RAID0) and set up SMB in a proper dataset. Write speed (single very large file) is still 80 MB/s (though this shows a single drive CAN sustain 80 MB/s by itself). Added ALL the drives as a stripe - still 80 MB/s. Created an iSCSI volume and mounted it via Windows 10 - still 80 MB/s. A pattern is emerging. :)

Not sure if the sh*t USB stick I have attached can push over 80 MB/s, but I'll copy a large file to it and try a local copy via SSH. The journey is fun and you learn at the same time...
 
Elliot

Joined
Dec 29, 2014
Messages
1,135
Has it been PROVEN that installing an Intel NIC will all of a sudden increase write performance?
I don't know about a magic cure-all, but the debate on the issue is settled. Realtek NICs perform poorly, with sometimes variable results, under FreeBSD/FreeNAS. Intel NICs perform extremely well with no wonky behavior under FreeBSD/FreeNAS.
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
Thanks Elliot - do you have a recommended PCI-E NIC I can get on Amazon/NewEgg?

BTW, I copied a large file to the local USB stick that runs the FreeNAS OS, then did a timed copy of it to the pool. Calculated 22 MB/s. Going to assume the USB stick is not up to the task. :)
 
Elliot

Joined
Dec 29, 2014
Messages
1,135

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
Thanks Elliot, I know what you mean about the fake junk on Amazon, etc.

As for further testing - I created two zpools, each with 2 drives in a stripe (RAID0). I dumped two large 4 GB files onto the first one via SMB (~82 MB/s), then did a timed copy from one zpool to the other at the command line. 8 GB over 34 seconds is ~240 MB/s. THIS is more like what I'm expecting! So this basically leaves me with SMB or the network stack. I guess I'll be ordering a new NIC, based on what I'm hearing about Realtek drivers in FreeBSD. :(
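For anyone following along, the timed copy was along these lines (pool and file names are placeholders):

Code:
date; cp /mnt/pool-a/file1.bin /mnt/pool-a/file2.bin /mnt/pool-b/; date
# ~8 GB in 34 seconds works out to roughly 240 MB/s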

In case people are wondering why I care about streaming bandwidth versus random I/O - this NAS is strictly for backup, so that's the stat I value most.
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
UNRAID all set up: added all disks with a single parity drive, set up SMB - 80 MB/s (which crept down to ~30 MB/s after a while). :) Think I'll have to bite the bullet on the Intel card. I have to admit I like the flexibility of UNRAID, but I'm going back to FreeNAS.
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
Ordered and installed an Intel i210-T1 single-port NIC. After a little frustration with the front-end GUI to get the cards "swapped", I got it online. Very sad to say that I'm still seeing ~80 MB/s write performance (network speed is ~650 Mb/s from my PC). Any further testing I can do to see where the bottleneck is? I'm considering connecting my PC directly to the NAS to eliminate the 1 Gb/s switch.
 
Elliot

Joined
Dec 29, 2014
Messages
1,135
network speed is ~650 Mb/s from my PC
Are you saying that iperf is reporting throughput of synthetic traffic at 650 Mb/s? If so, that is a definite problem on the network side. Assuming all connections are wired, I would not be satisfied with anything less than 900 Mb/s via iperf.
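For a clean test, run plain iperf2 in both directions, e.g. (substitute your FreeNAS IP):

Code:
# on FreeNAS (server side):
iperf -s
# on the Windows box (client side), 20-second run:
iperf -c <freenas-ip> -t 20
# then swap server/client roles to test the opposite direction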
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
Had to get iperf2 for Windows (version 3 was acting weird against the version 2 on FreeNAS):

Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.51 port 5001 connected with 192.168.1.2 port 60690
[  4]  0.0-20.0 sec  1.66 GBytes   711 Mbits/sec


If I reverse this (server is windows, client is FreeNAS), I get the following :

Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.2 port 5001 connected with 192.168.1.51 port 50852
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-20.0 sec  2.20 GBytes   943 Mbits/sec


This actually lines up well with the issue I'm having - receive is bad, send is good.

OF NOTE - if I READ a file from the NAS to my local Windows box (NVMe drive), it pushes 105 MB/s.

Ideas?
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I noticed the TCP window sizes are different. If I tell iperf on FreeNAS to drop the window to 64k, I get fairly bad performance. God, this reminds me of troubleshooting in the old days with Windows, LFNs (long fat networks), and single-stream transfers. The thing is, there is very little latency in this case (< 0.4 ms). :)
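Back-of-the-envelope check: the bandwidth-delay product at 1 Gb/s and 0.4 ms RTT is 125 MB/s x 0.0004 s = 50 KB, so even a 64 KB window should in theory sustain 64 KB per round trip = ~160 MB/s. Window size alone shouldn't cap a gigabit link at this latency.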
 
Elliot

Joined
Dec 29, 2014
Messages
1,135
It is not surprising to get different performance when one OS uses a different window size than the other. iperf3 is available on FreeNAS, but I don't use it - I had odd/bad results with it when trying to test a 40G network connection. I suspect 1G networks are less problematic with iperf3, but I decided to avoid it after that.

The idea here is to break the setup down into its components and test them individually. Given what you said, it doesn't appear that there is a problem with the network. There are numerous other threads you can search regarding pool construction, etc. You are using RAIDZ1, which isn't recommended any longer because large drives take a while to resilver, and you lose the pool if another drive fails while that is occurring. Mirrors (kind of like RAID1) give the best performance, but the least amount of effective space. My suggestion would be to rebuild a pool using just 2 of the same drive model as a mirror, and see what write performance you get.
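If you want a quick way to benchmark that from the shell before rebuilding properly in the GUI (the GUI is the supported way to make the permanent pool), a minimal sketch - device names are examples only:

Code:
# verify your device names first with: camcontrol devlist
zpool create -m /mnt/testpool testpool mirror /dev/ada1 /dev/ada2
zfs set compression=off testpool   # so zeroed test data isn't compressed away
dd if=/dev/zero of=/mnt/testpool/test.bin bs=1m count=4096
zpool destroy testpool             # clean up when finished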
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I appreciate your comments. I'm sure RAIDZ2 is a safer choice, but I'm only running 5 disks.

Here is what I wrote about testing drive performance :

I created two zpools, each with 2 drives in a stripe (RAID0). I dumped two large 4 GB files onto the first one via SMB (~82 MB/s), then did a timed copy from one zpool to the other at the command line. 8 GB over 34 seconds is ~240 MB/s. THIS is more like what I'm expecting!

It seems each drive is very capable of writing 80 MB/s, so it's unclear to me why it won't do it in a RAIDZ1.
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I copied an 800 MB file to /var/tmp, then copied it to the RAIDZ1 pool:

Code:
root@freenas[/var/tmp]# ls -lh TEST.FILE
-rw-r--r--  1 root  wheel   799M May 19 15:16 TEST.FILE
root@freenas[/var/tmp]#
root@freenas[/var/tmp]# date; cp TEST.FILE /mnt/MONSTER/nas; date
Tue May 19 15:16:56 EDT 2020
Tue May 19 15:16:59 EDT 2020


I copied this file over and over via a while loop. The GUI shows ~75 MB/s write per disk, so the pool seems very capable of writing at > 200 MB/s. This seems strictly related to the network or SMB. SFTP has the same speed limitation. This really smells like a network issue.
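The loop itself was trivial - something like this (destination filename is illustrative):

Code:
# overwrite the same target repeatedly to keep a sustained write load on the pool
while true; do cp /var/tmp/TEST.FILE /mnt/MONSTER/nas/TEST.COPY; done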
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I pushed it even harder. I left my GoodSync software copying files from my PC to the share AND ran the continuous copy of the file from /var/tmp to the zpool share. Each drive sustained ~60 MB/s write per the GUI. I suppose this is a little harder, as there are two separate operations instead of just one. I have since stopped the local copy and left just GoodSync running - back to my standard 80 MB/s write. :)
 

DurkaDurkaDurka

Dabbler
Joined
May 16, 2020
Messages
19
I have to wonder if this is something with just my Windows 10 PC. So I started running backups from my wife's PC to the NAS while my syncs are occurring (same switch, mind you). It looks like the network is now much closer to saturated: the network charts show ~900 Mb/s, and the dashboard shows 110-115 MB/s. I'm happy with this, though I wish my single PC could max it out. I'll have to start going down the dark road of upgrading things to 10 Gb/s. :)
 