Slow write speed on fast hardware. SMB/CIFS bottleneck?

Status
Not open for further replies.

Miles Tudor

Dabbler
Joined
Apr 10, 2016
Messages
13
Hi,
My first post.
First off, sorry, I'm a Linux noob, but I'm pretty good with Windows systems generally (been doing it since DOS 3.1!)

Hardware:
Supermicro X10DRI-T (integrated intel x540 10GbE)
Xeon E5-1620V3
16GB DDR4-2133 ECC REG Samsung/Hynix
8 Port SATA/SAS LSI HBA
8 x 2TB Enterprise SAS
3 x 1TB Samsung 840pro
8GB SDOM for OS (FreeNAS 9.10)

Netgear XS712T 10GbE switch.
Cat7 Cables
6 x Win10 workstations with Intel X540 NICs

Background:
I run a video production studio.
We recently moved out of a corporate contract job where all our kit was provided. Everyone worked locally on workstations with RAID0 drives and backed up to a number of large NAS units (when we left we were at 90TB; thankfully it stayed with the corporate client).

I purchased new workstations with smaller local data drives (1/2 TB M.2 SSDs - they are awesomely fast!), but am planning for everyone to work from the server over 10GbE - there was always masses of overlap as everyone duplicated large projects.

I built a Mk1 test server to evaluate OS options. I tried FreeNAS, OMV and Windows Server 2012, and settled on FreeNAS.

I then purchased a more professional server and am struggling to reach the performance I need.

The problem I have run into is with write speeds; read speeds have been pretty good.
My desired use is to have a smaller, fast pool made up of the 3 x SSDs in a stripe; everyone is to work from this on a daily basis.
The 8 x 2TB spinning disks are to be in a redundant RAIDZ1 or RAIDZ2 pool to back up the SSD pool and take daily system images of the workstations.
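In zpool terms, roughly what I'm after (pool and device names here are just placeholders):

zpool create fastpool da8 da9 da10    (3 x SSD, plain stripe, no redundancy)
zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7    (8 x 2TB SAS, double parity)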

No matter how I configure the drives I always seem to hit a limit of about 170 MB/s on writes (both the RAIDZ HDDs and the striped SSDs).
I've troubleshot the network with iperf and there doesn't seem to be a problem there.
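For reference, the sort of iperf check I mean ('freenas' stands in for the server's address):

On the FreeNAS box (server side):  iperf -s
On a workstation (client side):    iperf -c freenas -P 4 -t 30    (four parallel streams for 30 seconds)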

Writing from workstation to workstation also gets up to speed.

As an experiment I even set up a single SSD as a one-disk stripe, but still hit the write limit of approx. 170 MB/s.

I've been reading as much as I can about ZFS and understand the performance hit with RAIDZ, but I don't understand why I can't get the speed with striped SSDs.
I have a hunch it is something to do with CIFS/SMB.

If anyone can give me some pointers as to where to look next I would be extremely grateful.
If I have not provided the necessary information, I am happy to do so (where I understand and am able!)

Many Thanks.

Miles
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I'm far from an expert, Miles, just a couple of things you can look at...

Give this post a read first.

Also, I suggest you quadruple your RAM; with 6 workstations banging on your pool, 16GB is not going to be enough.

@jgreco We need your guru self please.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What sort of tuning has been done to it on the 10G side?

The X540 isn't thrilling but can probably be made to work fine.

Can I inflict either pain or pleasure? Try adding the following for tunables.

kern.ipc.soacceptqueue=256
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=4194304
net.inet.tcp.recvbuf_inc=4194304
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=4194304
net.inet.tcp.sendbuf_inc=4194304

Bet it flies. Or crashes. Maybe both. Might also need to stab CIFS with some adrenaline, I don't recall if CIFS tries to manipulate the socket settings itself or not. More memory will help. Shoot for 64GB, 128GB is pricey (~$800) but a nice performance booster.
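Those are all runtime sysctls, by the way, so you can try them live from the shell first and only save them as permanent tunables (type "sysctl") once you're happy:

sysctl kern.ipc.soacceptqueue=256
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.recvspace=4194304
sysctl net.inet.tcp.recvbuf_inc=4194304
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.sendspace=4194304
sysctl net.inet.tcp.sendbuf_inc=4194304

(The buffer defaults apply to newly opened connections, so reconnect the SMB clients when testing.)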
 

Miles Tudor

Dabbler
Joined
Apr 10, 2016
Messages
13
Thank you both for your prompt replies. Is it Sunday on your planet too? ;-)

I will certainly add more memory if it's going to help; the board certainly has more free slots than I've ever seen!

I will also try those tunables tomorrow and report back whether there is smoke.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thank you both for your prompt replies. Is it Sunday on your planet too? ;-)

Sun... day? What's that mean.

I will certainly add more memory if it's going to help; the board certainly has more free slots than I've ever seen!

Pfft, lightweight.

http://www.supermicro.com/products/motherboard/xeon/c600/x9dr7-tf_.cfm

So you're actually running an E5-16xx on a dual board? I always figured that'd work but never seemed to line up hardware at the right time to try it...

But if you want some sweet server pron,

http://www.supermicro.com/products/motherboard/Xeon/C600/X10QBI.cfm


I will also try those tunables tomorrow and report back whether there is smoke.

Should not be smoke. Mild possibility of crashes, deemed unlikely, just warning "it's possible", primarily because I don't know much about your network. With only six workstations I'd think it okay. It's mostly a low-memory thing.

If you've got the stomach to go a few rounds with this, you've got a great platform there to do testing, and I'm kinda hot on this topic right now, so we could try a few different things that'd go a long way to being generally helpful to all 10G users.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're better off waiting a bit to see how the performance turns out. It shouldn't be impossible to run with 16GB, just ... suboptimal.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
What sort of tuning has been done to it on the 10G side?

The X540 isn't thrilling but can probably be made to work fine.

Can I inflict either pain or pleasure? Try adding the following for tunables.

kern.ipc.soacceptqueue=256
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=4194304
net.inet.tcp.recvbuf_inc=4194304
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=4194304
net.inet.tcp.sendbuf_inc=4194304

Bet it flies. Or crashes. Maybe both. Might also need to stab CIFS with some adrenaline, I don't recall if CIFS tries to manipulate the socket settings itself or not. More memory will help. Shoot for 64GB, 128GB is pricey (~$800) but a nice performance booster.
Those sysctl values seem reasonable even for 16GB of RAM (only 6 clients). @Miles Tudor you should do some local write tests on the pool to see what is possible without networking/protocols in the way. When you have time, go to your pool and do a "dd if=/dev/zero of=testfile bs=1048576", let that run for 10 minutes, hit ^C, and see how fast it writes. Make sure compression is turned off on the dataset you are writing to, otherwise the results will be meaningless.
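Something along these lines, with "fastpool" standing in for whatever your pool is called (use a throwaway dataset so you don't touch real data):

zfs create fastpool/ddtest
zfs set compression=off fastpool/ddtest
cd /mnt/fastpool/ddtest
dd if=/dev/zero of=testfile bs=1048576    (let it run ~10 minutes, then ^C; dd reports bytes/sec)
dd if=testfile of=/dev/null bs=1048576    (optional read-back test)
cd / && zfs destroy fastpool/ddtest    (clean up when you're done)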
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
On the striped SSDs this should work out to a very high speed anyway. The numbers are slightly conservative, but as you know I've kinda been looking at the botch which is autotune with an eye towards maybe making it a little smarter, and part of that has to be the ability to pick intelligent values, not just pulled-em-outta-an-orifice type numbers.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
On the striped SSDs this should work out to a very high speed anyway. The numbers are slightly conservative, but as you know I've kinda been looking at the botch which is autotune with an eye towards maybe making it a little smarter, and part of that has to be the ability to pick intelligent values, not just pulled-em-outta-an-orifice type numbers.
Yeah, I follow where you're going. I like to start with a baseline when troubleshooting, and knowing what the pool is capable of gives me a target to shoot for. The striped SSDs should have great performance if everything is connected and working properly.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I disagree; I think CIFS will crap out at some point well below even what two striped SATA SSDs can deliver. One of those things where I'd like to be proven wrong.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I disagree; I think CIFS will crap out at some point well below even what two striped SATA SSDs can deliver. One of those things where I'd like to be proven wrong.
CIFS definitely has overhead, but you can saturate a 10GbE network with it. NFS would be a better way to go given the filer's current specs.

Edit: I'll just go back to breaking my own stuff :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
*coughs something that sounds suspiciously like "put up or shut up"* :tongue:
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
*coughs something that sounds suspiciously like "put up or shut up"* :p
This is the best I could do from home: a CIFS transfer to the filer in my signature from a Windows 2012 R2 server. That is full line rate 10GbE over CIFS to FreeNAS 9.10.
 

Attachments

  • 10Gbe.PNG (screenshot of the full line-rate 10GbE CIFS transfer)

Miles Tudor

Dabbler
Joined
Apr 10, 2016
Messages
13
kern.ipc.soacceptqueue=256
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=4194304
net.inet.tcp.recvbuf_inc=4194304
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=4194304
net.inet.tcp.sendbuf_inc=4194304

It didn't make any difference. (I added them as loader tunables, yes?)
 