"Slow" SMB write speeds on 10GB


Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
What is likely the bottleneck in this setup?

Current array is 16 NL-SAS drives in RAID 0 (for testing)
Adapter is an Intel X520
64 GB RAM
Pair of Xeon X5620 CPUs
The share is running on FreeNAS 10, everything at defaults. (I did try disabling compression; no difference.)

All metrics are in line with expectations, except sequential writes to the array.

[attached: screenshot of benchmark results]

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

An SMB write speed of approx. 200 MB/s seems a bit off for your system.
You should test every component bit by bit, starting with a local write onto your pool (you should be able to find the dd command syntax on the forum for that).
And then we will see ;-)
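Something along these lines, run from the FreeNAS shell, is the usual starting point (the pool name is a placeholder, and compression should be disabled on the dataset first, otherwise the zeros get compressed away and the number is meaningless):

dd if=/dev/zero of=/mnt/yourpool/testfile bs=4M count=10000

Delete the test file afterwards and post the throughput that dd reports.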

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
First I tested the hardware just to be sure.
With Windows installed and 15 drives set up as a Storage Spaces mirror (RAID 10), I can write to the array at fairly close to line speed:
[attached: screenshot of the Windows write benchmark]

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336

Well, this is good, but it won't mean anything for the FreeNAS instance you're running now.
Please run the dd tests so that we can rule that out.
We will then be able to move on ...

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
I don't have 10 gigabit hardware to experiment with, but do the following:
  • Post the contents of /usr/local/etc/smb4.conf
  • Try enabling AIO on the Samba share by adding the following auxiliary parameter: aio write size = 1
  • Try increasing the receive buffer size for Samba (a bit of voodoo that I don't typically condone): socket options = SO_RCVBUF=262144. That may be large, but experiment with different values and see if it affects performance. (The sketch just after this list shows what the end result looks like.)
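For reference, once those auxiliary parameters are in place, the relevant part of smb4.conf should end up reading roughly like this (everything other than the two added lines is whatever FreeNAS generates, and depending on where you add them they may land under [global] or under the share's own section):

[global]
    # ...FreeNAS-generated settings...
    aio write size = 1
    socket options = SO_RCVBUF=262144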
Have you done any network card "tuning" (i.e. adding sysctls / tunables)?

Note that manually setting the send / receive buffers in Samba can cause performance degradation for some connections (like over localhost).

Also post the results of the dd test. It would be good to get a baseline of what your pool is capable of performance-wise.

What HBA or RAID card are you using?

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
dd results; moving on to the SMB conf now.

This is with the array switched back to RAID 10.

dd if=/dev/zero of=/mnt/R10/testfile bs=4M count=10000
2.8 GB/s, 24% CPU usage ... (oops, compression was enabled, so the zeros were mostly being compressed away)

dd if=/dev/zero of=/mnt/R10/testfile bs=4M count=10000
1.0 GB/s, 18% CPU usage (compression off)
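For completeness, the read side could be checked the same way by reading the file back (keeping in mind ARC caching can inflate that number, since the file was just written):

dd if=/mnt/R10/testfile of=/dev/null bs=4M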

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
SMB4.conf:

[global]
config backend = registry

So does the:
aio write size = 1
line just go under the config backend line?

I haven't done any tuning so far

DD test is posted above.

Using an IBM M1015 card.

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553

aio write size = 1 would be added through the web UI under "services" -> "SMB" in the field "Auxiliary Parameters". Ditto regarding socket options.

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Wouldn't iperf be a better test of network transfer rates?

Also... do you have synchronous writes turned on?
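Something along these lines would do it, assuming iperf is available on both ends (the address is just a placeholder for your FreeNAS IP):

iperf -s                          # on FreeNAS (server side)
iperf -c 192.168.1.100 -t 30      # on the client; add -P 4 to try parallel streams

Then swap the server and client roles to test the other direction.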

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553

+1
iperf isn't a bad idea. Adding aio write size = 1 to the samba config forces samba to use AIO for writes. Whether AIO is working properly in samba on FreeNAS is another issue, but in theory it should work.
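If you want to confirm the parameter actually took effect, dumping the parsed config is a quick check, something like:

testparm -sv /usr/local/etc/smb4.conf | grep -i "aio write size"

(-v makes testparm print defaults too, so the line shows up even if the value matches the default.)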

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
I can't find Auxiliary Parameters on FreeNAS 10; has that been replaced with something else?
I haven't changed any of the sync write settings; I assumed it was off based on the random performance, but maybe not?

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Oh, I didn't realize you were using FreeNAS 10. I don't believe there is a way to modify smb4.conf "auxiliary parameters" in FreeNAS 10. Have you tried comparing 10 gigabit performance in FreeNAS 10 to performance in FreeNAS 9.10? There are more knobs to fiddle with in 9.10 (with a correspondingly greater potential to break things).

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
I haven't tried this on 9.10 yet, but I can.

iperf with FreeNAS as the server is only 1.55 Gbit/s.
iperf with FreeNAS as the client is 8 Gbit/s.

EDITED.

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
Did some more testing with iperf:

All on the same switch.
All systems using X520 adapters.
All systems connected with Twinax direct-attach SFP+ cables.

Server1 -> Server2: ~8 Gbit/s
Server2 -> Server1: ~8 Gbit/s

Server1 -> Server3 (FreeNAS): 1.5 Gbit/s
Server2 -> Server3 (FreeNAS): 1.5 Gbit/s

Server3 (FreeNAS) -> Server1: ~8 Gbit/s
Server3 (FreeNAS) -> Server2: ~8 Gbit/s

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Do you have lagg or something funny configured on FreeNAS?
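A quick way to check from the FreeNAS shell (ix0 is just an example name for the X520 port):

ifconfig -a        # look for lagg0 or anything else unexpected
ifconfig ix0       # media, MTU, and enabled options on the 10GbE port
netstat -i         # per-interface packet and error counters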

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hmmm... you may need to do some tuning...

I run FreeNAS in a VM on ESXi 6, so I don't know anything about NIC tuning on bare metal, but @Mlovelace posted some tuning tips here: Will this gear do 10 gigabits? Perhaps they will get you started down the right path.

Justin Aggus

Dabbler
Joined
Nov 11, 2016
Messages
27
I installed 9.10 and get the same results.

Found this thread:
10 Gig Networking Primer

And set all the settings the same as Mlovelace.
Results are better:
[attached: screenshot of the improved benchmark results]
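For anyone finding this later: they're mostly the stock FreeBSD network buffer sysctls, added under System -> Tunables in the 9.10 GUI. Roughly along these lines (illustrative values only; check Mlovelace's post for the exact ones he uses):

kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288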


Shouldn't FreeNAS do this automatically for a 10 Gb/s card?
It's not like this is a rare card; it's probably the most common 10 Gb/s card you can buy.