Performance issues with 9.2


ian351c

Hi All,

I recently upgraded my FreeNAS box, which had been performing great for the last few years. Lately I'd noticed performance seemed slower than I was used to, and after upgrading to 9.2 this morning, performance is in the toilet. I've been using FreeNAS for years now (so I'm not exactly a noob, though I don't claim to understand all the nuances of ZFS, if such a claim is even possible).

Old System:
Core i3 2500S
8GB RAM
3Ware 9650 (all drives in Single mode, write cache enabled)
5x 2TB drives
A single ZFS RAIDZ1 vdev/volume

New system:
Core i3 2500S (same as the old one)
16GB RAM
3Ware 9650 (same controller) (all drives in Single mode, write cache enabled)
6x 3TB drives (3x Hitachi and 3x WD)
A single ZFS RAIDZ2 vdev/volume with compression turned on

I've also started doing scheduled snapshots, which I wasn't doing before: every 4 hours with a 6-week lifetime.

Before the hardware upgrade, running FreeNAS 8.x and 9.0/9.1, I was seeing upwards of 120MB/s read and write performance, which meant I was capable of saturating the network link. That's plenty good enough for me.
After the hardware upgrade, running FreeNAS 9.1, I was seeing 60MB/s read and write performance both across the network and locally using dd. Not horrible, but not what I was expecting.
After upgrading to FreeNAS 9.2 performance is now at 2.5MB/s write and about 25MB/s read, which is unusable.
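
For reference, the local numbers are from plain sequential dd runs, along these lines (the path and sizes are just examples, and with compression on, writing zeroes will overstate the write number):

Code:
# Sequential write test: 8GB of zeroes in 1MB blocks
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=8192
# Sequential read test (ARC caching can inflate a second run)
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
rm /mnt/tank/ddtest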

gstat looks like this whenever any reading or writing is done:
Code:
dT: 2.116s  w: 2.000s  filter: da.$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1     13      0      0    0.0     10    100   90.3   93.0| da0
    1     13      0      0    0.0      9     85   99.5  106.5| da1
    0     15      0      0    0.0     11     85   83.2  103.3| da2
    0     15      0      0    0.0     11     85   83.0  102.8| da3
    0     15      0      0    0.0     11     91   82.8  101.4| da4
    1     13      0      0    0.0     10    100   90.4   94.8| da5
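
That view is just gstat filtered down to the pool disks, something like:

Code:
# Live per-disk ops/s, latency, and %busy for da0-da5 only
gstat -f 'da.$'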


These are the parameters autotune has enabled:
Code:
kern.ipc.maxsockbuf  2097152
net.inet.tcp.recvbuf_max 2097152
net.inet.tcp.sendbuf_max  2097152
 
vfs.zfs.arc_max  10571784283
vm.kmem_size  11746426982
vm.kmem_size_max  14683033728
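
You can double-check what's actually in effect from the shell with sysctl (vm.kmem_size and friends are loader tunables, so they show up read-only at runtime):

Code:
sysctl kern.ipc.maxsockbuf net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max
sysctl vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max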


I've searched the forums and haven't come up with anything exactly like my problem, though I've seen a few threads that suggest a lot of RAM with a relatively small pool may be a problem (though my issues aren't "bursty" at all). And I've seen various opinions on using a RAID controller with ZFS (though this has worked well for me over the years).

I'm hoping someone can point out a tuneable parameter or something to try to get performance back to where it was on this system.

Any help is appreciated.

Thanks!
 

ian351c

Figured it out. Apparently somewhere along the line I did this:

Code:
zfs set sync=always tank


Don't do that. It's terrible for performance. The proper setting (the default, which still uses the ZIL for synchronous writes) is:

Code:
zfs set sync=standard tank
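
To check what a dataset is actually using, and whether the value was set locally or is the default:

Code:
# Shows the current value and its source (default, local, or inherited)
zfs get sync tank

# Or drop the local setting entirely and fall back to the default
zfs inherit sync tank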
 

cyberjock

Don't take this the wrong way, but I didn't respond to your post when I read it because I just knew you'd done some tweak that ruined the pool's performance, and finding that tweak is next to impossible. This is why I continually harp on people not to touch knobs and switches they don't understand. The default settings are very well thought out and work for almost everyone!
 

ian351c

I've seen you a lot in my searches through the forums, so I'll _try_ not to take you the wrong way. :tongue: The funny thing is that the whole reason I started down this rabbit hole is that I noticed NFS performance was off. I shot myself in the foot with the zfs sync setting, but after I backed that out, NFS performance is back to saturating the gigabit interface on the server. I'm calling it a win at this point, and I'm happy that I learned some interesting stuff about ZFS, picked up a few new commands (gstat rocks), and am now on the latest FreeNAS.
 

cyberjock

Yeah, several people have had intermittent problems that eventually went away on their own, but since they were busy tweaking the system they didn't notice that, and wrongly credited their tweaks. I can't tell you how many times the problem was directly related to the tweaking itself. Once you start tweaking things it's really hard to find what you tweaked, especially if you do it from the CLI.
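
For ZFS properties at least, there's a straightforward way to find stray tweaks: list every property whose source is "local", meaning it was explicitly set rather than left at the default or inherited. A stray sync=always shows up immediately:

Code:
# List only explicitly-set (non-default, non-inherited) properties on the pool
zfs get -s local all tank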

There are literally hundreds of ways to kill performance and only a few combinations that yield good results. For most people (myself included) it's very hard to tweak things and come out ahead. Search the forums and you'll find people spending months trying to fix performance.

Another reason to avoid tweaking is that FreeNAS's defaults often change between versions to improve performance. If you force your own values, those changes never take effect, so the tweaks you set and then forgot about may be hurting performance on the current version.
 