Horrifyingly low performance

Status
Not open for further replies.

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
I've just put together my FreeNAS 8.3.1 box, and everything worked great. I love the interface, and I managed to get CIFS working with raidz2 without too much hassle.

The major problem, though: the read/write speeds horrify me. A single WD Red 3TB over SATA should manage around 110MB/s sequential reads. I know raidz2 is double parity, and I know I'm using 5 drives instead of 6, but that still doesn't come close to explaining the performance I'm seeing: 10MB/s reads and writes over CIFS. To rule out the network, I transferred a large 1.5GB .avi video file to the NAS and used dd to copy data from the array to itself, getting the same speeds. I realize I should have copied zeroes instead, to avoid reading and writing at the same time, but that's still slow.
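
For reference, the zero-based test I have in mind is something like this (a sketch; /mnt/tank stands in for my actual pool mount point):

Code:
# sequential write: ~10GB of zeroes, so no reads are involved
dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=10000
# matching sequential read back out
dd if=/mnt/tank/testfile of=/dev/null bs=1m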

I started to suspect FreeNAS itself, so I briefly switched to NAS4Free. That was painful (NAS4Free is rather brittle and needed repeated factory resets because saving settings would periodically break), but from a NAS4Free-created raidz2 array over CIFS I got read/write speeds of 42-60MB/s. That's a lot higher!

Any ideas on how to make FreeNAS perform anything like that?

Hardware:
16GB DDR3 @ 1066MHz
ASUS C60M1-I motherboard (AMD C-60 dual-core processor)
5x WD Red 3TB
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Have you done any iperf tests and dd tests?
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
Have you done any iperf tests and dd tests?

I did run a dd test with real file data as the source, but I should run another with pure zeroes to avoid reading during the test. I know the disks and controller can do more than 10MB/s because NAS4Free pulled 45MB/s from an identical config: same drives, RAM, CPU, controller, raidz2, network, CIFS, etc.

I haven't run an iperf test because the setup is a hassle (though I can if it's crucial); NAS4Free already showed the network isn't the issue, unless something is off on the FreeNAS configuration end.

Will try to get those run tomorrow after I have the time to reconfigure the box.
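
As I understand it, the iperf setup would just be server mode on the FreeNAS box and the client on my desktop; something like this (the port number is arbitrary):

Code:
# on the FreeNAS box (server side)
iperf -s -p 5555
# on the Windows client, pointed at the server's IP
iperf.exe -c <freenas-ip> -p 5555 -t 10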
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I know the disks and controller can do more than 10MB/s because NAS4Free pulled 45MB/s from an identical config: same drives, RAM, CPU, controller, raidz2, network, CIFS, etc.

Sure, but that doesn't prove that FreeNAS can hit those same numbers. I can get better benchmark numbers on Linux than on Windows, so should I be upset and post that Windows is slower? It's an apples-to-oranges comparison.

I haven't run an iperf test because the setup is a hassle (though I can if it's crucial); NAS4Free already showed the network isn't the issue, unless something is off on the FreeNAS configuration end.

Will try to get those run tomorrow after I have the time to reconfigure the box.

What you need to do is run speed tests of the different parts of your system to find the bottleneck. Once the bottleneck is found, you can start examining how to overcome it.
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
Got it. I'll run cleaner, more isolated tests to narrow down the bottleneck, since the problem evidently isn't immediately obvious.

The main issue is that I see others on this forum getting much higher speeds with comparable hardware. In another thread, for example, NAS4Free and FreeNAS were trading blows within 20% of each other, which points to a configuration issue on my end.

Will have results tomorrow.
 

CAlbertson

Dabbler
Joined
Dec 13, 2012
Messages
36
What Ethernet controller are you using? The Intel gigabit NICs are the best. If you have Realtek, that explains the 10MB/s.

OK, I just looked it up: your motherboard has an Intel® 82579V Gigabit Ethernet controller built in. So that is not it.
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
Okay, I ran dd and iperf tests after reinstalling FreeNAS from scratch and importing the 5-drive raidz2, 4096-byte-sector ZFS volume from NAS4Free.

DD:
Code:
[root@freenas] ~# cd /mnt/testpool/testset
[root@freenas] /mnt/testpool/testset# dd if=/dev/zero of=temp.dat bs=1024k count=25k
25600+0 records in
25600+0 records out
26843545600 bytes transferred in 234.747357 secs (114350789 bytes/sec)
[root@freenas] /mnt/testpool/testset#

=> ~114.35 MB/s writes! That's what I'm expecting here.

Did a second test with a Windows build of iperf:
Code:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\quantumnerd\Downloads>iperf.exe -c 192.168.0.14 -p 5555
------------------------------------------------------------
Client connecting to 192.168.0.14, TCP port 5555
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[156] local 192.168.0.13 port 61028 connected with 192.168.0.14 port 5555
[ ID] Interval       Transfer     Bandwidth
[156]  0.0-10.0 sec   571 MBytes   479 Mbits/sec
C:\Users\quantumnerd\Downloads>

which is about 60MB/s! Until I hit that, the network isn't my cap.
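
One caveat I've read about: the default 8.00 KByte TCP window shown above is small for gigabit and can throttle the result on its own, so a re-run with a larger window might show more headroom:

Code:
iperf.exe -c 192.168.0.14 -p 5555 -w 64k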

Did a read/write test over CIFS on the new setup and I'm getting 24MB/s write with an .avi file and ~50MB/s read. Something's weird. Going to check whether this is an issue with FreeNAS's ZFS pool creation.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Uh, you are failing to take into account that those tests are independent of each other, but in reality your machine is both accessing the zpool AND using CIFS when actually serving files. You can expect 60MB/sec if nothing else is going on with your CPU, but the truth is ZFS IS a CPU hog and so is CIFS.

You might see better numbers with an Intel NIC. Other than that, I don't have any specific recommendations. Intel NICs work well because they offload some of the network processing from the CPU to the card.
 

CAlbertson

Dabbler
Joined
Dec 13, 2012
Messages
36
Okay, I ran dd and iperf tests after reinstalling FreeNAS from scratch and importing the 5-drive raidz2, 4096-byte-sector ZFS volume from NAS4Free....

iperf measures the bandwidth, but you don't know whether the Windows machine or the FreeNAS server is the bottleneck. You need more than two computers or you will never know. FreeNAS should be able to "flood" a gigabit Ethernet link. I'd suspect the Windows PC.

Same with a "dd" test you don't know which end is the bottle neck. Why blame Freenas?

One test you can do is run multiple tests at the same time. For example, run four clients all pushing data to the FreeNAS server at once, then look at how many bytes per second go across the FreeNAS NIC (see the commands at the end of this post). It should be very close to 1Gb/sec when serving multiple clients.

Also use the "top" display on the termini window of FreeNAS to look at RAM and CPU utilization of the server while running these tests. You'd like to see the CPU never go above 50% (that is averaged over one second.)

In short, the server should be able to serve data at almost "wire speed", which is about 100MB/sec. If not, first suspect a slow client; if you rule that out by using four simultaneous clients, look at CPU and RAM utilization. With your current test I can't rule out a slow client system.
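
To watch the NIC counters in real time from the FreeNAS shell, something like this should work (the interface name is a guess; substitute whatever ifconfig shows, e.g. re0 for Realtek or em0 for Intel):

Code:
# per-second packet/byte counts for one interface (FreeBSD netstat)
netstat -w 1 -I re0
# or a live overview of all interfaces
systat -ifstat 1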
 

engmsf

Dabbler
Joined
May 26, 2013
Messages
41
Hardware:
16GB DDR3 @ 1066MHz
ASUS C60M1-I motherboard (AMD C-60 dual-core processor)
5x WD Red 3TB

I have a similar setup: same motherboard, but with 8GB RAM and 3x WD Red 3TB in RAIDZ1. Transferring large files from Windows 7, I see 45-65 MB/s (in the Windows transfer dialog that pops up). I am also using the onboard Realtek NIC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I have a similar setup: same motherboard, but with 8GB RAM and 3x WD Red 3TB in RAIDZ1. Transferring large files from Windows 7, I see 45-65 MB/s (in the Windows transfer dialog that pops up). I am also using the onboard Realtek NIC.

That's about what I'd expect for a RAIDZ1. Not sure about a RAIDZ2, which is what the OP has. :/
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
OP,

Can you do me a big favor? Can you build your zpool as a RAID0, then a RAIDZ1, then a RAIDZ2, and test each with the following command? Quite a few people use that CPU and I'm interested in a benchmark comparison....

dd if=/dev/zero of=/mnt/zpool/testfile bs=4m count=10000

If you could also provide the CPU usage % for each, that would be fantastic (the command is top).
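
If you'd rather build the test pools from the shell instead of the GUI, here's a rough sketch, assuming your five disks show up as da0 through da4 (destroy and recreate between runs):

Code:
zpool create testpool da0 da1 da2 da3 da4          # stripe ("RAID0")
zpool destroy testpool
zpool create testpool raidz da0 da1 da2 da3 da4    # RAIDZ1
zpool destroy testpool
zpool create testpool raidz2 da0 da1 da2 da3 da4   # RAIDZ2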
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
Same with a "dd" test you don't know which end is the bottle neck. Why blame Freenas?
I'm not blaming FreeNAS for the drives being the bottleneck. For all I know a RAID6/raidz2 write shouldn't be that fast, and I can't actually use more bandwidth than gigabit LAN speed anyway; internally I'm getting that, since I'm past 100MB/s with dd. The issue is that I'm still pretty far under that dd speed for network writes (yes, I know overhead is a thing, but I suspect I'm seeing more overhead than I should).

One test you can do is run multiple tests at the same time. For example, run four clients all pushing data to the FreeNAS server at once, then look at how many bytes per second go across the FreeNAS NIC (see the commands at the end of this post). It should be very close to 1Gb/sec when serving multiple clients.
I will do that and post the results today or tomorrow. Have to wait for a second machine to free up (and I'm pretty sure a VM does not count). Lolwindows, etc.

OP,

Can you do me a big favor? Can you build your zpool as a RAID0, then a RAIDZ1, then a RAIDZ2, and test each with the following command? Quite a few people use that CPU and I'm interested in a benchmark comparison....

dd if=/dev/zero of=/mnt/zpool/testfile bs=4m count=10000

If you could also provide the CPU usage % for each, that would be fantastic (the command is top).

I'll try to do that; it makes sense that my config, an absurdly cheap CPU/mITX board paired with popular drives and a popular OS, might be common, so benchmarks are valuable. Anyway, a question: I'm new to Linux/BSD/POSIX admin conventions, so I'm not sure what to do with the top command. It runs in real time and has a lot of flags. What flags do you want me to run, and how am I supposed to capture coherent results from a command like that? I'm pretty sure I can't just pipe it to a text file.

Second question: when you say RAID0, do you mean a stripe? I assume a ZFS stripe is a software stripe, but it almost sounds like you mean motherboard RAID.

Third question: I'm a bit confused about my drives. Should I be using "Force 4096 bytes sector size"?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll try to do that; it makes sense that my config, an absurdly cheap CPU/mITX board paired with popular drives and a popular OS, might be common, so benchmarks are valuable. Anyway, a question: I'm new to Linux/BSD/POSIX admin conventions, so I'm not sure what to do with the top command. It runs in real time and has a lot of flags. What flags do you want me to run, and how am I supposed to capture coherent results from a command like that? I'm pretty sure I can't just pipe it to a text file.

You can. I do it in PuTTY and then copy the terminal to the clipboard.


Second question: when you say RAID0, do you mean a stripe? I assume a ZFS stripe is a software stripe, but it almost sounds like you mean motherboard RAID.

Exactly. Stripe = RAID0 for ZFS.

Third question: I'm a bit confused about my drives. Should I be using "Force 4096 bytes sector size"?

Depends on your drives. If they have 4K sectors you definitely should. If not, you still can, if you want to be future-proof.
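
If you want to check what a pool actually got, zdb will show the ashift (12 means 4K-aligned, 9 means 512-byte sectors). I believe on FreeNAS you have to point it at the system's zpool.cache:

Code:
# the zpool.cache path is my guess for a FreeNAS 8.x install
zdb -U /data/zfs/zpool.cache -C testpool | grep ashift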

The intent of the above test is to determine the CPU loading for the various ZFS types. I'd ask you to do RAIDZ3 if I thought it was meaningful, but you have too few disks for the results to matter.

Edit: actually.. you do have 5 disks.. so the test might be interesting, to say the least.

Overall though, I'd consider you very lucky if you could sustain 50MB/sec with CIFS and RAIDZ1. I'd guess that with RAIDZ2, 30MB/sec might be the max you'll be able to sustain.
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
So basically, after triggering the write through the web UI/local shell, I SSH in with PuTTY, copy a snapshot of the top resource usage (nothing fancy like data over time), and paste that.

What flags do you want me to use?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Flags? For what? The command only needs to be changed to point to your zpool.

Not sure if you saw my edit above, but you could also try a RAIDZ3 just for testing.

I always just SSH in. The local shell is nice but copying/pasting is a PITA.

I meant a copy/paste of the dd output. For top, the only really important line is the third one, the one that says "CPU:" and gives the breakdown. I guess the dd output is only one line too.. LOL
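
If copying out of the terminal is a pain, top also has a batch mode that writes straight to a file; run it while the dd is going:

Code:
# two snapshots, five seconds apart, in batch mode (FreeBSD top)
top -b -d 2 -s 5 > /tmp/top.txt
# the line that matters looks like:
# CPU:  12.5% user,  0.0% nice, 48.3% system,  2.1% interrupt, 37.1% idle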
 

quantumnerd

Dabbler
Joined
May 26, 2013
Messages
14
Flags? For what? The command only needs to be changed to point to your zpool.

Not sure if you saw my edit above, but you could also try a RAIDZ3 just for testing.

I always just SSH in. The local shell is nice but copying/pasting is a PITA.

I meant a copy/paste of the dd output. For top, the only really important line is the third one, the one that says "CPU:" and gives the breakdown. I guess the dd output is only one line too.. LOL

Okay, here's the summary of my dd write testing: I ran two tests each on stripe, raidz, raidz2, and raidz3. I tried to grab the top snapshot as late in each run as I could, but I'm not perfect and a bit lazy.

Stripe:
Load average 1 min: ~2.25?
dd CPU usage: ~58%
dd speed: 290 MB/s

RAIDZ1:
Load average 1 min: ~6.65
dd CPU usage: ~35% (test 1's snapshot was grabbed during a round of Python background tasks; test 2 is more representative of the normal case)
dd speed: 158 MB/s

RAIDZ2:
Load average 1 min: ~7 (test 1 had a peak)
Load average 5 min: ~5.15
dd CPU usage: ~21%
dd speed: 115 MB/s

RAIDZ3:
Load average 1 min: ~8.5?
Load average 5 min: ~6.5?
dd CPU usage: ~17%
dd speed: 81.5 MB/s

And here are the raw results

Based on what I've read about top (though I'm not great at interpreting the results), it looks like my two cores are being loaded fairly heavily. I'm not sure how expensive the network and CIFS work is, but it's probably contributing to the slowdown I'm seeing. I knew FreeNAS was RAM-heavy, but I didn't think it was this CPU-heavy. What do you think? Is that enough to justify getting 22% of the raw drive performance over the network?

Question: what should I use to see how much traffic is going through the NIC? Might as well confirm whether I'm getting more than half the pipe, even though it doesn't look like the bottleneck here.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I knew FreeNAS was RAM-heavy, but I didn't think it was this CPU-heavy. What do you think? Is that enough to justify getting 22% of the raw drive performance over the network?

FreeNAS isn't CPU-heavy. If you were using UFS, your machine would be overpowered for the task. ZFS does use CPU resources, but it only appears to be a CPU hog because you are using a low-powered CPU. My 3-year-old Xeon can do something over 1.2GB/sec; I know because that's what I get during scrubs, and I don't even hit 75% utilization.



Question: what should I use to see how much traffic is going through the NIC? Might as well confirm whether I'm getting more than half the pipe, even though it doesn't look like the bottleneck here.

You already did that with the iperf test.

The dd command is the best case scenario for writes. It allows for large contiguous writes.

The iperf test is also a "best case", and you did fairly poorly in it (mostly because it's a Realtek card with a less-than-ideal CPU).

Keep in mind that Samba is single-threaded and can be very CPU-heavy, particularly on less powerful CPUs.

So here are my recommendations:

1. Stick with RAIDZ1 (not my favorite option)
2. Get an Intel NIC (my favorite option)
3. Get a more powerful CPU (2nd favorite option)

It's hard to say what speed you'd get with the Intel NIC, so that is still somewhat of a gamble. Also keep in mind that when FreeNAS is upgraded to 9.1 your overhead will go up, so you can expect performance to drop with the next update. If you aren't particularly happy with your speeds now, you're not likely to like them any more when 9.x is released.
 
