Where is my bottleneck?

Joined
Apr 30, 2019
Messages
4
My FreeNAS is insanely slow. This is my first NAS so I'm not even sure what to expect for transfer speeds. I originally built it with 3x 8TB's in RAIDZ1. That was around 20 MB/s maybe? I recently added 3x 500GB SSD's in a new pool/vdev, and it's faster, but not significantly. Somewhere around 40 MB/s.

Honestly I don't even really need the capacity on the hard disk array. I'm not a data hoarder. I'm mostly just using it as a central file server that all of my laptops and PCs around the house can get to, and so that I can do distro hopping. I need speed more than I need capacity, basically.


Motherboard: Supermicro X8DTL-iF (dual LGA-1366, Intel 5500, ICH10R + IOH-24D, Intel 82574L gigabit Ethernet). Only supports SATA II.
CPUs: 2 x Xeon E5520 (4-core / 8 threads, 2.26 GHz)
RAM: 96GB (6x16GB) DDR3 PC3-8500R 4Rx4 ECC
LSI 9201-8i HBA in IT mode with latest firmware
Case: Rosewill 4U with 12x hot-swap bays
Hard drives: 3x 8TB (2 are shucked WD Elements / HGST, and 1 is a WD Red), ZFS RAIDZ1
SSDs: 3x 512GB SATA III (1 Silicon Power, 1 Samsung EVO 860, and 1 Pioneer APS-SL3N-512), ZFS RAIDZ1
FreeNAS: 11.2-U2
Shares to clients via SMB

I think the 8TB pool is connected directly to the Supermicro's onboard SATA, and the 500GB pool goes through the LSI card.

When I use iperf, I can get near perfect speeds between the FreeNAS server and a Linux client PC. 850+ MB/s if I recall correctly. I'm happy with that. It's not a network bottleneck.

So this is probably a dumb question, but where's my bottleneck? What would be the best use of my money for upgrades?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Run your iperf test in both directions and post the results. FreeNAS should be the server for one test, then the client for the next test.

After that, try running a dd test over your SMB mount.
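Something like this from the client, assuming the share is already mounted (the mount path here is just a placeholder, substitute your own):

```shell
# Placeholder mount point; point this at your actual SMB mount.
MOUNT=${MOUNT:-/tmp/smbtest}
mkdir -p "$MOUNT"

# Write test: push 256 MB across the share, flushing to disk at the end.
dd if=/dev/zero of="$MOUNT/ddtest.dat" bs=1M count=256 conv=fsync

# Read test: pull the file back and throw the data away.
dd if="$MOUNT/ddtest.dat" of=/dev/null bs=1M

rm -f "$MOUNT/ddtest.dat"
```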
 
Joined
Apr 30, 2019
Messages
4
FreeNAS should be the server for one test

Code:
flyinglotus1983@nuc:~$ iperf -c 192.168.1.5
------------------------------------------------------------
Client connecting to 192.168.1.5, TCP port 5001
TCP window size:  578 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.46 port 43514 connected with 192.168.1.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   988 MBytes   828 Mbits/sec 


then the client for the next test.

Code:
root@freenas[~]# iperf -c 192.168.1.46
------------------------------------------------------------
Client connecting to 192.168.1.46, TCP port 5001
TCP window size:  761 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.5 port 12947 connected with 192.168.1.46 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec 


Someone on the FreeNAS subreddit pointed out that I said 850 MB/s in my post, but that was a mistake; I had my units mixed up. It's more like 850-950 Mbits/sec. If you convert megabits/sec to megabytes/sec, that's decent for gigabit Ethernet, but not what I'm looking for, performance-wise. Which has me wondering whether I should look into 10-gigabit fiber or copper. I'd just like to eliminate any other possibilities or issues before upgrading, because I don't think 10GbE will be cheap.
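For reference, the back-of-the-envelope conversion (8 bits per byte) on my faster run:

```shell
# 943 Mbit/s measured by iperf, divided by 8 bits per byte:
awk 'BEGIN { printf "%.1f MB/s\n", 943 / 8 }'   # prints 117.9 MB/s
```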
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
I wouldn't bother looking into 10Gb networking (unless you want to for fun). Your network transfer speeds are fine for 1Gb. It's the pool speed that isn't working for you. We need more details on how you are testing the pool speeds, like the dd test suggested above.
 
Joined
Apr 30, 2019
Messages
4
I'm using dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct for the tests below.

Client SSD, no FreeNAS:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75576 s, 286 MB/s

FreeNAS SSD pool to client:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6656 s, 92.0 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6231 s, 92.4 MB/s

FreeNAS 8TB pool to client:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.3654 s, 94.5 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.4403 s, 93.9 MB/s

I didn't bother to test read speeds because I figure they'll be cached and misleading, but happy to take suggestions if there's a way to do it.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
You can increase the size of your test so it doesn't fit in ARC:

dd if=/dev/zero of=test.dat bs=2048k count=50000
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Make sure to disable compression when using /dev/zero ...
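E.g. something like this on a scratch dataset (the dataset name here is hypothetical):

```shell
# Turn off compression on the dataset used for testing, then verify.
zfs set compression=off tank/ddtest
zfs get compression tank/ddtest
```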
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I'm using dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct for the tests below.

Client SSD, no FreeNAS:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75576 s, 286 MB/s

FreeNAS SSD pool to client:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6656 s, 92.0 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6231 s, 92.4 MB/s

FreeNAS 8TB pool to client:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.3654 s, 94.5 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.4403 s, 93.9 MB/s

I didn't bother to test read speeds because I figure they'll be cached and misleading, but happy to take suggestions if there's a way to do it.
You've got the right idea. So do something with a 100GB file and test both reading and writing. Also, why did you say you are going from pool to client? All your dd tests should be happening from your client to the server.

Your speeds look fine to me. Not sure why you said you got 40MB/s in your original post. You clearly are not.

Also, don't use the oflag flag; increase to bs=1M and use a dataset that has compression turned off.

You can also run this dd test locally in FreeNAS to get the max sequential IO your pool can do.
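A sketch of the local test (TESTDIR is a stand-in; point it at a dataset with compression off, and raise count past your 96GB of RAM for a real run):

```shell
# Run on the FreeNAS box itself; takes the network and SMB out of the path.
TESTDIR=${TESTDIR:-/tmp}

# Sequential write (bump count so the file exceeds RAM for a real benchmark).
dd if=/dev/zero of="$TESTDIR/ddfile" bs=1M count=64
# Sequential read of the same file.
dd if="$TESTDIR/ddfile" of=/dev/null bs=1M
rm "$TESTDIR/ddfile"
```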
 