Performance Tuning Help


brad87

Hey Everyone,

I'm sure you get a lot of questions like this in here. I have done my best to follow the hardware/sizing guidelines, but I still feel I'm falling short performance-wise on reads/writes to my pools. I'm hoping I can get some advice to help with my performance. I'll start with my specs:

Build FreeNAS-9.10.2-U3 (e1497f269)
Platform Intel(R) Xeon(R) CPU L5640 @ 2.27GHz (in Host mode)
Memory 49122MB
Pool 1 FBRow23-Z2 (8x4TB 7200RPM SAS HGST) RaidZ2
Pool 2 FBRow45-Z1 (6x3TB 5400RPM SATA Mixed) RaidZ1
Hypervisor Proxmox 4.4
HBA 2x M1015 flashed IT, passed through to VM (1 Per Pool)

Local rsync transfer speeds from Pool 1 to Pool 2 are ~40 MB/s (large media files)
dd /dev/zero test to both pools is ~150 MB/s
rsync from Pool 1 over the network is ~60 MB/s (large media files)
Iperf:
[ 5] local 192.168.3.31 port 5001 connected with 192.168.3.11 port 31014
[ 5] 0.0-10.0 sec 2.39 GBytes 2.05 Gbits/sec
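
For reference, the dd test was done along these lines; the path, block size, and count below are illustrative rather than the exact values I used:

Code:
# Illustrative only: sequential write test against a pool's mountpoint
# (block size and count are examples, not the exact values from my run)
dd if=/dev/zero of=/mnt/FBRow23-Z2/ddtest.bin bs=1M count=8192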

Any suggestions would be greatly appreciated. If you have ideas for further benchmarking, let me know and I will run the tests.
 

Spearfoot

Moderator

Hello, and welcome to the forums!

Your iperf results lead me to believe you're running 10GbE. Is that right? If so, what kind of NIC are you using?

FWIW, in my own experience with 10GbE I found that I needed to enable jumbo frames in order to get good transfer rates.
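
On the FreeNAS side that just means bumping the interface MTU to 9000; the interface name below is only an example (it depends on your NIC driver), and the switch plus the other end of the link have to match:

Code:
# Example only: the interface name varies (e.g. ix0 for Intel 10GbE, vtnet0 for a virtio NIC in a VM)
ifconfig ix0 mtu 9000

If you do it from the GUI, putting "mtu 9000" in the interface's Options field keeps it across reboots.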
 

brad87


That's a bit of an anomaly, actually: that vNIC is connected to a bridge backed by a real 1 GbE NIC on the hypervisor (the iperf test client I used is connected to the same bridge). I also have a secondary, virtual-only bridge for the VMs to talk to one another, which performs quite a bit faster:

[ 4] local 192.168.0.31 port 5001 connected with 192.168.0.11 port 44851
[ 4] 0.0-10.0 sec 16.8 GBytes 14.4 Gbits/sec


Since these results are both way higher than my local/remote transfer speeds, I haven't worried too much about optimizing them, although once this hurdle is cleared I will look into jumbo frames on a dedicated physical NIC for this VM.
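
For completeness, those iperf numbers came from a plain default run, something like this:

Code:
# On the FreeNAS VM (server side)
iperf -s
# On the other VM (client side), pointed at the FreeNAS VM's address on the virtual-only bridge
iperf -c 192.168.0.31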
 

brad87

No other suggestions?

Is a 40 MB/s local transfer between pools normal?

I would just like to make sure I am getting the proper performance from this great software. Any help is much appreciated!
 

sweeze

I've had some weird results with certain scenarios and certain versions of rsync.

Just so I'm on the same page, are you logged in via SSH to your FreeNAS system and then using rsync there (on my FreeNAS 11 system it's rsync 3.1.2, protocol 31)?
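
You can check which rsync you're getting with something like:

Code:
rsync --version | head -1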

Are the two pools using any L2ARC or ZIL devices?

Just looking at your description of the pools I'd expect Pool 1 to be faster at reads and writes than Pool 2.

If my assumption that you're using FreeNAS's rsync(1) is correct and you're doing something like `rsync -azvP /mnt/pool1/whatever/linuxisos /mnt/pool2/whatever/`, have you watched the output of `zpool iostat -v 1` while this is happening to see if the read/write activity is obviously keeping one pool waiting around for writes to complete or something?
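
That is, something like this in two shell sessions while the copy runs (the paths are just the placeholders from the description above):

Code:
# Session 1: the local copy between pools (placeholder paths)
rsync -azvP /mnt/pool1/whatever/linuxisos /mnt/pool2/whatever/
# Session 2: per-vdev read/write activity, refreshed every second
zpool iostat -v 1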

There are a lot of variables that can affect this sort of thing. My FreeNAS has a couple of pools too, but I use 2- and 3-way mirrored vdevs and single-disk pools on USB 3 devices, so I can't really provide an example or set expectations for you. (I just rsync'ed a copy of Fletch I ripped off DVD and averaged about 10 MB/s from my internal pool to my USB 3 pool.)

Don't forget that if your filesystem uses compression, dd'ing from `/dev/zero` may not be a super useful metric, depending on how you measure it.
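
For example, with lz4 enabled a stream of zeros compresses away to almost nothing, so it's worth checking the property and, if needed, using a source that doesn't compress. The dataset name and sizes below are placeholders, and note that /dev/urandom itself can become the bottleneck:

Code:
# Check whether compression is enabled on the target dataset (placeholder name)
zfs get compression FBRow23-Z2
# Incompressible data gives a more honest write number than /dev/zero,
# though the random generator itself may limit throughput
dd if=/dev/urandom of=/mnt/FBRow23-Z2/ddtest.bin bs=1M count=2048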

I believe most people would recommend using `zfs send` and `zfs receive` for copying things between pools when possible, to avoid getting bogged down in userland the way rsync can. There are a lot of variables that can affect rsync's performance, and disk IO is only one of them.
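
A minimal sketch of that approach, assuming a dataset named "media" on the source pool (both dataset names below are placeholders):

Code:
# Snapshot the source dataset, then stream it into the destination pool
# (dataset names are placeholders)
zfs snapshot FBRow23-Z2/media@migrate
zfs send FBRow23-Z2/media@migrate | zfs receive FBRow45-Z1/media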

I know it's a bit tedious to explain the test case in more detail, but I suspect you can appreciate that there are a lot of ways to do what you described, so being really clear will help someone either corroborate your results or point out opportunities for improvement.
 

brad87

Thank you so much for the reply!!

I am not using L2ARC or ZIL, although I have recently increased the RAM on the FreeNAS machine to 64 GB without any improvement. Yes, I am SSHing to the FreeNAS machine and rsyncing as you described. I am not using any compression or dedup.

I will try zfs send/receive, as well as post the rsync and zpool iostat output here for analysis.

Thanks again!
 

brad87

Here is the zpool iostat output captured during the rsync. This is one large file being transferred. I was able to get around 50-60 MB/s on this transfer; do you see anything odd with this output?

http://termbin.com/8jm0
 

sweeze

I'm looking at your numbers now. I'm sure it's a stupid question, but what is the significance of your pool names? I was trying to google for variations on them to see if I could find out how these were likely attached physically.

But I'd still like to know how they're attached and via what means.

To be fair, I don't know if you are having a problem or not, but my macOS workstation has a mirrored zpool, and for a long sustained write off an SSD that I'm rsyncing to it, I see bursts of 64 MB/s in `zpool iostat -v` with rsync dutifully reporting something like 32 MB/s. My activity across my disks looks much more representative of my usage, and what you posted doesn't really eyeball the same to me for whatever reason.

That is really unhelpful and I'm sorry. I don't know anything about the hypervisor you're using, and that's yet another variable that will require expertise I certainly lack! I was honestly hoping for something easy like "you're saturating your controllers and you're out of bandwidth"!

I've been in your situation before, and every time someone points out a variable it sounds like people are just creating busywork or something. I'm hoping someone knows about hypervisor variables or has specific flags for your zpools that would be helpful. I use pretty much defaults on FreeNAS, but on my macOS workstation I did pass some non-default properties when I created the pool:

Code:
cloister  checksum     sha256  local
cloister  compression  lz4     local
cloister  atime        off     local


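For what it's worth, properties like those can be applied either at pool creation time or afterwards with plain zfs set commands, along these lines (cloister is just my pool's name):

Code:
# How the properties above are typically set (pool name is mine)
zfs set checksum=sha256 cloister
zfs set compression=lz4 cloister
zfs set atime=off cloister
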
I'll think about this some more and see if I can come up with anything else. I'm a little weirded out that we don't have a clear picture of the read vs. write activity between the two pools in a meaningful way.

If someone who knows dtrace rolls through here, they'd probably be able to suggest a very specific way to measure read vs. write.
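
I'm not that person, but as a rough sketch of the kind of thing I mean, the io provider can sum bytes issued per process, something like this (untested here, and splitting reads from writes takes a little more work with the buffer flags):

Code:
# Rough sketch: total I/O bytes issued, grouped by process name
# (io provider one-liner; not verified on this FreeNAS build)
dtrace -n 'io:::start { @bytes[execname] = sum(args[0]->b_bcount); }'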
 

brad87

Thanks for the response. The pool names are related to the 24-bay chassis that holds the physical drives. Basically, it's Front Bay Rows 2 and 3, RAIDZ2.
 

sweeze


eSATA? SCSI? Do the disks in both pools have their own lanes to stay in?
 

brad87

The chassis is a Norco RPC-4224. Each row of 4 drives has an individual SAS connector leading to a port on an M1015, and each pool is using its own M1015.
 