FreeNAS read/write performance


rodfantana (Dabbler, joined Jun 10, 2017, 27 messages)
Hey guys, hope all is well!

I'm running a virtualized FN 9.10-U3 on ESXi 6.5. Everything is set up according to generally accepted best practices as far as reservations go from the hypervisor perspective (CPU shares = high, memory is reserved and shares are high).

Spec-wise:
VM specs: 2 vCPUs (Xeon D-1541), 8 GB RAM, 8 GB vmdk that hosts FN and the system volume.
RAIDZ1 data vol: LSI 9207-8i (in passthrough mode) with 4 x 6 TB Toshiba X300 7200 RPM drives, with ERC set to 7 sec at boot.
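For reference, that 7-second ERC limit is the sort of thing usually applied with a post-init smartctl command; a rough sketch, assuming the drives show up as da0-da3 behind the HBA:

Code:
# Illustrative only: set SCT ERC to 7.0 s (70 deciseconds) for reads and writes.
# da0-da3 are assumed device names; adjust to match the actual system.
for d in da0 da1 da2 da3; do
    smartctl -l scterc,70,70 /dev/${d}
done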

PROBLEM: When I try to read or write via a CIFS mount (haven't tried other methods) I get about 90-100 MB/sec from this data vol. That's the speed I get when I read or write a 30 GB file; nothing else is using that storage at the time. It also jumps between those numbers a lot. Other storage that sits on the same switch does a consistent 112 MB/sec without hiccups and saturates the 1 Gbps switch during the same file read/write. Looking at Reports in the GUI, I don't see anything that looks pegged, nor on the ESX side.
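One way to check whether the pool itself is part of the problem is a local sequential test on the dataset, before SMB and the network get involved. A rough sketch (the dataset path is a placeholder, and compression should be off on the test dataset, otherwise zeroes from /dev/zero compress away and inflate the number):

Code:
# Write ~30 GiB to the pool locally, then read it back.
# /mnt/tank/test is a placeholder path; substitute the real dataset.
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1m count=30720
dd if=/mnt/tank/test/ddfile of=/dev/null bs=1m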

Does anyone have any suggestions on what else to look at?

Thanks in advance.

~Rod.
 

SweetAndLow (Sweet'NASty, joined Nov 6, 2013, 6,421 messages)
Huh? You are getting about 1 Gbps speeds with your file transfer. Did you expect to get something different?

For testing network performance you should use iperf from the client to the server. And make sure to test in both directions, because the client could have some funky stuff happening.
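With iperf2 the test looks something like this (addresses are placeholders; swap the roles to test the other direction):

Code:
# On the server (FreeNAS) side:
iperf -s

# On the client, a 10-second run with 1-second intervals:
iperf -c <server-ip> -i 1

# Then run iperf -s on the client and iperf -c <client-ip> from the server
# to test the opposite direction.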
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Yes - I'm expecting to get what I get when transferring between other clients on the same network: about 112 MB/sec. Is that unreasonable?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Yes - I'm expecting to get what I get when transferring between other clients on the same network: about 112 MB/sec. Is that unreasonable?
What does iperf tell you?

You are within the margin of error, if you ask me. Normally if you see 90 MB/s or more you are maxing out your throughput.
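Rough back-of-the-envelope numbers for a gigabit link (approximate, assuming a 1500-byte MTU):

Code:
1 Gbit/s                          = 125 MB/s on the wire
after Ethernet/IP/TCP framing    ~= 117 MB/s of TCP payload
after SMB/CIFS protocol overhead ~= 105-115 MB/s for large sequential files

So 90-100 MB/s is already in the neighborhood of what the link can realistically deliver.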

 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
What does iperf tell you?

You are within the margin of error, if you ask me. Normally if you see 90 MB/s or more you are maxing out your throughput.


Thanks - you were right that it's really a network issue, not a storage issue.

iperf2 test from Win7 (physical) to FN on ESX1:

Code:
Client connecting to 10.0.100.91, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.100.151 port 57631 connected with 10.0.100.91 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 1.0 sec   106 MBytes   888 Mbits/sec
[  3]  1.0- 2.0 sec  94.2 MBytes   791 Mbits/sec
[  3]  2.0- 3.0 sec  89.8 MBytes   753 Mbits/sec
[  3]  3.0- 4.0 sec  90.8 MBytes   761 Mbits/sec
[  3]  4.0- 5.0 sec  90.5 MBytes   759 Mbits/sec
[  3]  5.0- 6.0 sec  90.9 MBytes   762 Mbits/sec
[  3]  6.0- 7.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  7.0- 8.0 sec  91.0 MBytes   763 Mbits/sec
[  3]  8.0- 9.0 sec  90.8 MBytes   761 Mbits/sec
[  3]  9.0-10.0 sec  89.2 MBytes   749 Mbits/sec
[  3]  0.0-10.0 sec   922 MBytes   774 Mbits/sec

c:\tools\iperf-2.0.8b-win64>



iperf2 test from Win7 (physical) to a Win2k16 VM that sits on the same ESX host:

Code:
c:\tools\iperf-2.0.8b-win64>iperf.exe -c srv-backup1 -p 5001 -i 1
------------------------------------------------------------
Client connecting to srv-backup1, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.100.151 port 57658 connected with 10.0.100.13 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 1.0 sec   112 MBytes   943 Mbits/sec
[  3]  1.0- 2.0 sec   112 MBytes   942 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   943 Mbits/sec
[  3]  3.0- 4.0 sec   112 MBytes   942 Mbits/sec
[  3]  4.0- 5.0 sec   112 MBytes   943 Mbits/sec
[  3]  5.0- 6.0 sec   112 MBytes   942 Mbits/sec
[  3]  6.0- 7.0 sec   112 MBytes   943 Mbits/sec
[  3]  7.0- 8.0 sec   112 MBytes   942 Mbits/sec
[  3]  8.0- 9.0 sec   112 MBytes   943 Mbits/sec
[  3]  9.0-10.0 sec   112 MBytes   942 Mbits/sec
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec

c:\tools\iperf-2.0.8b-win64>



I've tried using both the vmxnet3 and the E1000 driver - the results are the same. I spun up an out-of-the-box vanilla FN 9.10-U2 VM just to make sure it's not any of my settings causing this, and the iperf stats are the same. Do you know if anything needs to be tuned on the FN side to get to the 940 Mbit/sec mark?
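In case it points anywhere useful, one common starting point on FreeBSD-based systems is raising the TCP socket buffer limits via Tunables (System -> Tunables, type sysctl); the values below are only illustrative, not tested recommendations for this box:

Code:
# Illustrative sysctl tunables; values are examples, not a tuned profile.
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144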

Thanks,

~Rod.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks - you were right that it's really a network issue, not a storage issue. [...] Do you know if anything needs to be tuned on the FN side to get to the 940 Mbit/sec mark?
Not really sure - that's a VMware question. Can you pass a physical NIC through and see what the performance is? That would really narrow it down to the VMware driver interacting with FreeNAS.

 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Not really sure - that's a VMware question. Can you pass a physical NIC through and see what the performance is? That would really narrow it down to the VMware driver interacting with FreeNAS.

I can't do a passthrough on a NIC at this point. However, I tried iperf on a CentOS 7 VM, and it runs the same as in the Win2k16 example, i.e. 940+ Mbit/sec... hmmm.

Don't get me wrong - I totally agree with you that it's a VM issue - something FreeNAS doesn't like about this host. I'm installing FN11 as a test to see what iperf says there... will update shortly.
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
UPDATE: I installed FN11. iperf seems to do better with it - I get about 800 Mbit/sec, but still not 940+ like everywhere else. The results are the same whether I try VMXNET3 or E1000, which makes me think it's something else in the OS?
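One FreeBSD-on-ESXi knob that sometimes matters here is NIC hardware offload; a hedged check, assuming the vmxnet3 interface shows up as vmx0:

Code:
# Show the current offload flags on the interface (vmx0 is an assumption).
ifconfig vmx0

# Temporarily turn off TSO and LRO to see whether the iperf numbers change.
ifconfig vmx0 -tso -lro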

On a side note - I also run pfSense on the same ESX host with the same adapters, and it gets 940+ Mbit/sec on its FreeBSD 10.3 base... Just wanted to throw that in, because I initially thought maybe it's FreeBSD that doesn't like the VMware drivers, but apparently that's not the case...

All VMs run VMware Tools, whichever version ships with the distro.

Any suggestions are welcome,

Thanks!
 