CIFS transfer @ 50-60 MB/s

Status
Not open for further replies.

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Hey,
I just connected my FN to my Windows PC through a 1 Gbps switch and tried to transfer three large files (~1 GB each).
The transfer rate in the Windows dialog shows 50-60 MB/s.

I have 6 drives in a RAIDZ2 configuration.
Shouldn't it be closer to 110 MB/s?
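For context, a rough back-of-envelope figure for what a single gigabit link can deliver (assuming ordinary 1500-byte frames; the exact overhead varies with the protocol stack):

Code:
# 1 Gbit/s = 1000 Mbit/s, and 1 byte = 8 bits
echo $((1000 / 8))   # prints 125, the raw line rate in MB/s
# Ethernet/IP/TCP/SMB framing eats a few percent of that,
# so roughly 110-118 MB/s is the practical ceiling for a CIFS copy.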
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Just to compare, transferring the same files between two PCs does reach the 100-110 MB/s limit...
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Haven't tested yet... CIFS is single-threaded?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Yep, but that thread is not close to 100%.
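One way to sanity-check that is to watch per-CPU and per-thread load on the FreeNAS box while a copy is running; a sketch using FreeBSD's top from an SSH session:

Code:
# -S shows system processes, -H breaks processes out into threads,
# -P shows per-CPU totals instead of a single aggregate.
top -SHP
# If one smbd thread (or a kernel thread) pegs a single core during the copy,
# the share is CPU-bound even though total CPU usage looks modest.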
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
Just to compare, transferring the same files between two PCs does reach the 100-110 MB/s limit...

FreeNAS, especially when running ZFS, does a lot more checks for data integrity purposes, which in effect puts more processor load on reading and transferring files. All else being equal, a FreeNAS<->Windows transfer will take up more system resources on the server than a Windows<->Windows transfer, and if the hardware isn't up to snuff it could bottleneck in FreeNAS where it otherwise wouldn't in Windows.

For example, the hardware I'm running FreeNAS on can hit full gigabit transfer speed when it's running Windows to another Windows machine, but when I put FreeNAS on it I get between 40 and 70 MB/s, simply because transfers in FreeNAS max out my Pentium D (it doesn't help that I have a Realtek NIC either), whereas it doesn't get maxed out doing the same transfer under Windows.

What hardware are you running?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
See my signature.
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
Oh yeah, silly me. Hmm, yeah, that should definitely be sufficient. Can you verify that the bottleneck isn't the receiving client?

What speed does iPerf give you between the two machines?
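For anyone following along, the usual way to run that test is an iperf server on the FreeNAS side and a client on the Windows side; a minimal sketch (the server address matches the one used later in the thread):

Code:
# On the FreeNAS box (server side):
iperf -s
# On the Windows client, pointed at the server's IP:
iperf -c 192.168.1.10 -t 10 -i 1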
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
When this client gets a file from another Windows client (PC), I get 100-110 MB/s...
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
Would that other Windows client be positioned at the same logical place on the network (i.e., going through all the same switches and routers between the two)?

Have you checked what speeds you get if you take your networking equipment out of the picture (a direct connection between FreeNAS and the client in question)?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Would that other Windows client be positioned at the same logical place on the network (i.e., going through all the same switches and routers between the two)?

Have you checked what speeds you get if you take your networking equipment out of the picture (a direct connection between FreeNAS and the client in question)?

Same thing with a direct cable.
It starts at 110 and after less than a second drops to 50-60 MB/s...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Ok, so what does some iperf testing say between the client and server?

Also, did you enable compression or dedup? Both of those will significantly hurt performance.
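A quick way to check both from the FreeNAS shell, as a sketch ("tank" is a placeholder pool name):

Code:
# Show compression and dedup settings for the pool and every dataset under it.
zfs get -r compression,dedup tank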
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Ok, so what does some iperf testing say between the client and server?
Code:
bin/iperf.exe -c 192.168.1.10 -P 1 -i 1 -p 5001 -f k -t 10
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[156] local 192.168.1.103 port 60973 connected with 192.168.1.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[156]  0.0- 1.0 sec  52952 KBytes  433783 Kbits/sec
[156]  1.0- 2.0 sec  50320 KBytes  412221 Kbits/sec
[156]  2.0- 3.0 sec  52040 KBytes  426312 Kbits/sec
[156]  3.0- 4.0 sec  51824 KBytes  424542 Kbits/sec
[156]  4.0- 5.0 sec  51680 KBytes  423363 Kbits/sec
[156]  5.0- 6.0 sec  52416 KBytes  429392 Kbits/sec
[156]  6.0- 7.0 sec  51928 KBytes  425394 Kbits/sec
[156]  7.0- 8.0 sec  52336 KBytes  428737 Kbits/sec
[156]  8.0- 9.0 sec  52112 KBytes  426902 Kbits/sec
[156]  9.0-10.0 sec  51856 KBytes  424804 Kbits/sec

Also, did you enable compression or dedup? Both of those will significantly hurt performance.
compression = yes
dedup = no
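Worth noting in that output: the Windows iperf client used its default 8 KB TCP window, which by itself can cap single-stream throughput. A possible follow-up, sketched with illustrative (not tuned) values, is to rerun with a larger window and parallel streams:

Code:
# Rerun with a 256 KB TCP window and 4 parallel streams to see whether the
# ~430 Mbit/s ceiling is a window/stream limit or a genuine network fault.
bin/iperf.exe -c 192.168.1.10 -p 5001 -i 1 -t 10 -w 256k -P 4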
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
Well, that could very well be why. What mode of compression did you enable?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
lz4
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Compression can quickly kill the performance of your pool. That CPU is cheap and isn't meant to be a powerhouse.

Your iperf results say that the problem is with your network. The fact that it's maxing out at less than 500 Mbit/sec means you can't do more than that, period. That test should be doing over 900 Mbit/sec. So I'd start looking at your network settings on both your server and client. Note that just because your client worked with another box at 100 MB/sec+ doesn't mean it isn't the problem. The server and client on a network have a relationship, and they must work together.
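A couple of low-level checks that fit this advice, sketched with a placeholder interface name (substitute whatever ifconfig reports on the FreeNAS box for em0):

Code:
# Confirm the NIC actually negotiated gigabit full duplex.
ifconfig em0 | grep media
# Look for input/output errors (Ierrs/Oerrs) that would point at a bad cable or port.
netstat -i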
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Since this is a home media server, I can turn compression off and redo the tests.
I'll report back...
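For reference, toggling compression is a one-liner from the shell (or the dataset settings in the GUI); the dataset name below is a placeholder, and the change only affects newly written data:

Code:
# Disable compression on the dataset being tested ("tank/media" is a placeholder).
zfs set compression=off tank/media
# Re-enable lz4 afterwards if desired.
zfs set compression=lz4 tank/media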
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
Your iperf results say that the problem is with your network. The fact that it's maxing out at less than 500 Mbit/sec means you can't do more than that, period. That test should be doing over 900 Mbit/sec. So I'd start looking at your network settings on both your server and client. Note that just because your client worked with another box at 100 MB/sec+ doesn't mean it isn't the problem. The server and client on a network have a relationship, and they must work together.

I don't 100% trust iPerf results as an end-all-be-all indicator of network performance. On my build iPerf maxes out at 30 MB/s, but pretty much any file transfer I perform is faster than that; generally the file transfers don't dip below 40 MB/s, while iPerf won't even reach that.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
iperf has been an industry standard for testing throughput for quite a few years. I have yet to see a situation where iperf was wrong in its assessment on the forums.

The other thing is that any kind of testing has many variables that you have little or no control over. Generally, getting anything over 850 Mbit/sec is considered "full gigabit," even though that's still about 15% below the theoretical maximum.

In this case, the fact that it was only hitting about half of the theoretical maximum tells me something is VERY wrong with the network setup. If it had been 850 Mb/sec I would have dismissed the network setup as the problem. Not surprisingly, the iperf test and the speeds that the OP is getting are pretty much in line with each other. Most people who complain about network performance never have real network transfer speeds that exceed iperf (usually they are exactly the same as iperf or lower, because the bottleneck isn't the network). This tends to validate that iperf is a useful tool for diagnostic purposes.

@MtK: iperf doesn't use the zpool, so enabling or disabling compression has no bearing on this test. This is one of many reasons why iperf is a good benchmark for identifying potential network problems. You need to examine your network infrastructure and network settings. Maybe try replacing your network cables and double-checking your network settings. You shouldn't need to customize the server with tunables, sysctls, or custom network settings to get amazing speeds with that hardware. I have saturated dual Gb LAN with far less powerful hardware.
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
iperf has been an industry standard for testing throughput for quite a few years. I have yet to see a situation where iperf was wrong in its assessment on the forums.

The other thing is that any kind of testing has many variables that you have little or no control over. Generally, getting anything over 850 Mbit/sec is considered "full gigabit," even though that's still about 15% below the theoretical maximum.

In this case, the fact that it was only hitting about half of the theoretical maximum tells me something is VERY wrong with the network setup. If it had been 850 Mb/sec I would have dismissed the network setup as the problem. Not surprisingly, the iperf test and the speeds that the OP is getting are pretty much in line with each other. Most people who complain about network performance never have real network transfer speeds that exceed iperf (usually they are exactly the same as iperf or lower, because the bottleneck isn't the network). This tends to validate that iperf is a useful tool for diagnostic purposes.

@MtK: iperf doesn't use the zpool, so enabling or disabling compression has no bearing on this test. This is one of many reasons why iperf is a good benchmark for identifying potential network problems. You need to examine your network infrastructure and network settings. Maybe try replacing your network cables and double-checking your network settings. You shouldn't need to customize the server with tunables, sysctls, or custom network settings to get amazing speeds with that hardware. I have saturated dual Gb LAN with far less powerful hardware.

Ah, so iPerf showing lower throughput than actual transfers isn't necessarily an abnormality?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'm sure you want a yes/no answer, and the bottom line is that you can't expect a yes/no answer to that question; it takes more brains than a blind yes/no. iperf isn't the end-all you're trying to argue against (it has its own limitations, which require you, the admin, to actually have some knowledge about networking, which sadly most people don't). But it's a damn good starting point for seeing what is going on without relying on VERY variable and unreliable things such as "throughput of CIFS".

You might not remember CIFS from the XP days, but getting over about 40 MB/sec on a single link was pretty much impossible because of how CIFS worked. Luckily for all of us, CIFS was updated to the SMB 2.0 spec, which took a protocol originally designed in the 1980s, when high latency and low throughput were standard (and it was designed for exactly those conditions), and made it compatible with modern networks (low latency/high throughput). So using network protocols as a way to benchmark your network performance is a horrible way to do business. Just check out this link to a fairly technical article on the "complete redesign" of CIFS here. There's plenty of testing people have done with SMB1 versus SMB2; some show a more than tenfold increase in performance between the two. That clearly demonstrates that CIFS should NEVER be used as a benchmarking tool if you can get that kind of fluctuation just by changing the protocol version.

Actual network usage at the moment you run the iperf test versus different loading when you do the file transfer test can affect the results too. There are dozens of factors that can affect both throughput and iperf. iperf is nice because it can tell you that something is wrong. In this case, because:

In this case, the fact that it was only hitting about half of the theoretical maximum tells me something is VERY wrong with the network setup. If it had been 850 Mb/sec I would have dismissed the network setup as the problem. Not surprisingly, the iperf test and the speeds that the OP is getting are pretty much in line with each other. Most people who complain about network performance never have real network transfer speeds that exceed iperf (usually they are exactly the same as iperf or lower, because the bottleneck isn't the network). This tends to validate that iperf is a useful tool for diagnostic purposes.

The problem with using throughput from a file transfer is that its behavior is slightly different from an iperf test. For one, a pool that is the limiting factor can give a falsely low (or high) value. Protocol compression and transmitting large quantities of zeros can show 1 GB/sec+ even though we know better.

Additionally, iperf can give very weird, bizarre results if you do things like improperly set up jumbo frames. We've seen people get just 100 Mb/sec for one second, then 800 Mb/sec the next. Averaging that out to 400 Mb/sec in a network transfer isn't too useful, but the bizarre iperf results are VERY useful!
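One quick way to spot that kind of jumbo-frame mismatch, sketched with placeholder values (a 9000-byte MTU; the 8972-byte payload leaves room for the 28 bytes of IP and ICMP headers):

Code:
# Confirm each interface is using the MTU you expect.
ifconfig | grep mtu
# Send a full-size, don't-fragment ping to the client; if jumbo frames are
# misconfigured anywhere on the path this fails while normal pings still work.
ping -D -s 8972 192.168.1.103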

So yes, I stand behind iperf as a troubleshooting and diagnostic tool, as does just about everyone else who does network troubleshooting. There's a reason it's considered an industry standard, is compatible with a lot of network diagnostic equipment, and comes built in with FreeNAS, Linux, FreeBSD, ESXi, and even custom networking OSes such as monowall and pfsense. It really is an amazing tool. Interpreting the results is just as important as understanding how the test works. If you don't want to use it or rely on it, that's totally your prerogative. But there's no other tool out there that benchmarks the network subsystem and nothing else (which is extremely important when troubleshooting and benchmarking) any better, and there quite possibly never will be.

Only one widely used OS that I know of doesn't ship with iperf, and that's Windows. And I won't even go there, because Windows really is a bloated piece of sh*t that needs to die with fire.
 