Overall Slow Read/Write Speeds on all Share Types

Status: Not open for further replies.
Joined: May 5, 2015 | Messages: 5
I have been a FreeNAS user since v8.x and never experienced bad performance before, though my old box was a really small two-drive setup on definitely-unrecommended hardware. I was able to get around 75 MB/s read and write speeds over AFP and NFS from my various hardwired devices on my gigabit network. I decided to build a new box using recommended components, with six drives in RAIDZ2. However, I am getting read and write speeds much lower than before, around 20 MB/s or less. I have tried all share types to see if there is a performance difference; CIFS from my Mac seems a little better, but not by much. I have scoured the forum here to see what I may be doing wrong. Here is the hardware that I have:

Supermicro X10SLL-F-0
16GB Crucial Unbuffered ECC Memory
Pentium G3220
(6) 2TB Toshiba 7200RPM SATA drives (probably not recommended)
450 Watt Gold Power Supply

I ran some dd commands that were recommended in another post to see what the actual hardware can do, and here are my results:

Code:
WRITE

% dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 651.469782 secs (164818362 bytes/sec)

READ

% dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out

107374182400 bytes transferred in 625.776908 secs (171585402 bytes/sec)
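
For context, dd reports bytes/sec; converting those figures to MiB/s (a quick sketch using awk, with the numbers copied from the output above) shows the pool itself is comfortably faster than gigabit Ethernet:

```shell
# Convert dd's bytes/sec figures (taken from the output above) to MiB/s.
to_mib() {
  awk -v bps="$1" 'BEGIN { printf "%.0f\n", bps / 1048576 }'
}

to_mib 164818362   # write: ~157 MiB/s
to_mib 171585402   # read:  ~164 MiB/s
```

At roughly 157/164 MiB/s locally, the disks are well past what a gigabit link can carry, so 20 MB/s over the shares points away from the pool itself.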


I have looked at the reporting in the web GUI, and it seems that my CPU doesn't really max out, although I know this Pentium is pretty underpowered. I figured it had to be better than the old AMD I had been using in my last box!

I have made sure that compression is off on my pool, as I know that can hinder performance. I have also rebuilt the array a few times in different sizes to see if that impacts performance, but I still get lackluster read/write speeds.

I have also tried autotune, but that doesn't seem to do anything either. What are some suggestions I could try? Is my CPU a bottleneck here? Thanks for any insight; I tried to do my homework before I posted.
 
Joined: May 5, 2015 | Messages: 5
I just tried an iperf test, and it seems that my network is the bottleneck, which seems weird to me as the cabling and router are the same as before. I have tried both onboard Intel NICs, as well as the PCIe Intel card pulled from my old FreeNAS box, with similar results. Here are my iperf test results:

Code:
server:iperf-2.0.5-i686-apple-darwin10.5.0 michaelscarvelis$ ./iperf -c freenas.local -t 60 -i 10 -f M
------------------------------------------------------------
Client connecting to freenas.local, TCP port 5001
TCP window size: 0.13 MByte (default)
------------------------------------------------------------
[  5] local 10.0.1.10 port 59945 connected with 10.0.1.126 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   305 MBytes  30.5 MBytes/sec
[  5] 10.0-20.0 sec   214 MBytes  21.4 MBytes/sec
[  5] 20.0-30.0 sec   195 MBytes  19.5 MBytes/sec
[  5] 30.0-40.0 sec   198 MBytes  19.8 MBytes/sec
[  5] 40.0-50.0 sec   203 MBytes  20.3 MBytes/sec
[  5] 50.0-60.0 sec   194 MBytes  19.4 MBytes/sec
[  5]  0.0-60.1 sec  1308 MBytes  21.8 MBytes/sec
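
For scale: assuming iperf's `-f M` output is in binary MBytes/sec, a fully saturated gigabit link tops out around 1e9 / 8 / 2^20 ≈ 119 MB/s, so ~21 MB/s is far below wire speed:

```shell
# Theoretical ceiling of a gigabit link in binary MBytes/sec
# (1 Gbit/s = 1e9 bits/s; /8 -> bytes/s; /2^20 -> MiB/s).
awk 'BEGIN { printf "%.0f\n", 1e9 / 8 / 1048576 }'   # prints 119
```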
 

SweetAndLow (Sweet'NASty) | Joined: Nov 6, 2013 | Messages: 6,421
Try going the other direction with the iperf test. Also, are you using WiFi?
 
Joined: May 5, 2015 | Messages: 5
Thanks SweetAndLow. In doing more research, I found that if I set the sysctl tunable net.inet.tcp.delayed_ack = 1, then I get between 75-90 MB/s down. I don't really understand exactly what it does; my research shows it controls delayed TCP acknowledgements. Other posts and info show that setting it to 0 typically increases performance.

Is there any way that setting this tunable could be bad for my data?
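
For anyone following along, the tunable can be inspected and toggled from the FreeBSD shell on the box (shown for reference only; a persistent setting would go through the FreeNAS Tunables page in the GUI). Note that this sysctl only changes TCP acknowledgement timing, so it cannot corrupt data at rest, though it can hide an underlying network fault:

```shell
# FreeBSD shell on the FreeNAS box (reference only).
sysctl net.inet.tcp.delayed_ack      # show the current value
sysctl net.inet.tcp.delayed_ack=1    # change it until the next reboot
```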
 

SweetAndLow
Setting tunables probably isn't what you want to be doing; something else in your network is broken. The defaults are right for 95% of users, and the other 5% have special workloads that they tune for. I wouldn't suggest playing with tunables; they will just make things worse.

What does iperf give you for devices other than the FreeNAS box?
 

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,526
If you have to set that tunable to 1, then you have some kind of network snafu. Assuming you aren't doing something silly like trying to do iperf tests over wifi, the iperf tests validate that something is "wrong" with your network.

What the problem is, and how to fix it is a different discussion.
 
Joined: May 5, 2015 | Messages: 5
OK. I have tested the other way with iperf, but I still have the tunable set, if that means anything. I get marginally better speeds. And yes, this is all hardwired. My network topology in testing is my Mac Mini hardwired to my Ubiquiti EdgeRouter Lite, to my Cisco SG-200 gigabit switch, to my FreeNAS box. I have tried multiple cables between them just to be sure. After this, I will connect my Mac directly to the FreeNAS box and try a direct connection. Thanks again for your help.

Code:
Client connecting to 10.0.1.10, TCP port 5001
TCP window size: 0.03 MByte (default)
------------------------------------------------------------
[  3] local 10.0.1.126 port 42803 connected with 10.0.1.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   541 MBytes  54.1 MBytes/sec
[  3] 10.0-20.0 sec   522 MBytes  52.2 MBytes/sec
[  3] 20.0-30.0 sec   534 MBytes  53.4 MBytes/sec
[  3] 30.0-40.0 sec   542 MBytes  54.2 MBytes/sec
[  3] 40.0-50.0 sec   550 MBytes  55.0 MBytes/sec
[  3] 50.0-60.0 sec   554 MBytes  55.4 MBytes/sec
[  3]  0.0-60.0 sec  3244 MBytes  54.1 MBytes/sec
 

cyberjock
Well, assuming you aren't doing something silly like using a Realtek NIC on the FreeNAS server (you aren't according to your first post), anything less than 100MB/sec is pretty much a "fail" for iperf.

So you definitely have something up with your networking still. ;)
 
Joined: May 5, 2015 | Messages: 5
Lurking on the forums as long as I have has definitely taught me not to do silly things like use Realtek NICs or test over WiFi, for fear of the wrath of cyberjock and jgreco.

So, connecting directly from my computer to the server gives me proper (AFAIK) results in iperf with the tunable removed. Here are the results of that test:

Code:
Client connecting to 10.0.1.126, TCP port 5001
TCP window size: 0.13 MByte (default)
------------------------------------------------------------
[  4] local 10.0.1.10 port 59129 connected with 10.0.1.126 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1101 MBytes   110 MBytes/sec
[  4] 10.0-20.0 sec  1100 MBytes   110 MBytes/sec
[  4] 20.0-30.0 sec  1102 MBytes   110 MBytes/sec
[  4] 30.0-40.0 sec  1102 MBytes   110 MBytes/sec
[  4] 40.0-50.0 sec  1108 MBytes   111 MBytes/sec
[  4] 50.0-60.0 sec  1091 MBytes   109 MBytes/sec
[  4]  0.0-60.0 sec  6604 MBytes   110 MBytes/sec


I then went ahead and bypassed my router, connected directly to my switch, and got similar results. So the issue lies in my router somehow, and I think I know the reason. I have an EdgeRouter Lite, and I have bridged two of its ports together to act as a switch. Now I understand why the forums say not to do that due to the performance hit! I have not reconfigured my router to test yet, but I suspect that is the issue. Anyway, my network was definitely FAIL.

Thanks a lot for the assistance in helping me out. Now, back to the memtest as I already tested my drives with badblocks.
 

SweetAndLow
If you ping your FreeNAS box through your router, do you have any packet loss?
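
One way to check (a sketch; the awk filter just pulls the loss figure out of ping's summary line, demonstrated here against a sample line rather than a live host):

```shell
# Extract the loss figure from ping's summary line, e.g. from:
#   ping -c 100 freenas.local | tail -2
echo "100 packets transmitted, 100 packets received, 0.0% packet loss" |
  awk -F', *' '{ print $3 }'   # prints 0.0% packet loss
```

Anything other than 0% loss through the bridged router ports would support the theory above.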
 