Rebuilt raidz2 with very poor performance

Status
Not open for further replies.

vvizard

Cadet
Joined
Dec 29, 2012
Messages
3
Hi, I just rebuilt my array. I used to have a 6-disk raidz2 (6x3TB Seagate Barracudas), which I have now rebuilt into an 8-disk array mixing the aforementioned Barracudas with two new WD Red drives. Performance is now very poor: I get read/write speeds around 70MB/s on the array, while before I'm pretty sure I had at least 150MB/s.

All drives are connected to an LSI 8-port SAS controller (the same controller used previously).
The computer has a 3GHz i3 CPU and 16GB of RAM.

I have lzjb compression turned on (I think I did before as well).
Sector sizes are (according to the zfs cache) 512 bytes (ashift=9). I might have used the "force 4K sectors" option last time, not sure.

These speeds are not tolerable for my use. Does anyone know what might cause this? Could it be that 8 disks is an "odd" size for raidz2, and that I should either put in another disk and move to raidz3, or split into separate volumes? Help appreciated.

EDIT: I'm not sure how much 512-byte sectors would hamper speed. From a post I read somewhere, it was stated that it would probably not be visible to the user speed-wise though.
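
(For reference, this is roughly how I checked the ashift; "tank" is just an example pool name, and on FreeNAS the pool cache file may have to be pointed at explicitly:)

zdb -C tank | grep ashift
# if zdb can't find the pool, the FreeNAS cache file location may need to be given:
zdb -U /data/zfs/zpool.cache tank | grep ashift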
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
What are you using to measure the read/write speeds? Assuming you have gigabit ethernet, the best you could do is somewhere around ~100MB/sec, with a theoretical maximum of 125MB/sec to any single client.

Now, aside from that, when I did the testing on number of drives in a vdev and sector size, I found the negative impact of either to be negligible. Certainly having multiple vdevs will help speed somewhat, but nowhere near that 100% increase you're expecting -- more like 10-15%. That disk rule of thumb is total drives minus parity, so adding another drive will only make a difference if you stay at z2.
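
To spell that rule of thumb out with the disk counts in this thread:

6-disk raidz2: 6 - 2 = 4 data disks
8-disk raidz2: 8 - 2 = 6 data disks
9-disk raidz3: 9 - 3 = 6 data disks (no more data disks than the 8-disk z2)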

I guess the other thing you could investigate is your drives. z2 throughput will be limited to the speed of your slowest drive. I assume the new drives are as fast as the older Seagates, but if they aren't, that is likely the source of the problem.
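
A quick and dirty way to compare the drives individually is a raw sequential read with dd against each device (da0, da1, ... are just example names -- camcontrol devlist will show yours):

dd if=/dev/da0 of=/dev/null bs=1m count=5000   # ~5GB sequential read from the raw device
dd if=/dev/da1 of=/dev/null bs=1m count=5000   # repeat for each disk and compare the MB/s dd reports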

Do you have any further information on your previous configuration -- any tunables or other adjustments you might have made?
 

vvizard

Cadet
Joined
Dec 29, 2012
Messages
3

My NAS performance (over the network) has previously been measured at ~95MB/s, mostly via the client's file-transfer dialog, but it was also verified with iperf when the array was newly built. I have never seen speeds drop below 90MB/s for large sequential reads/writes to/from the NAS. Now the network speed is ~60MB/s.
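
(The iperf check was nothing fancy, just something along the lines of the stock client/server pair; the IP is an example:)

iperf -s                       # on the FreeNAS box
iperf -c 192.168.1.100 -t 30   # on the client, pointed at the NAS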

The raw throughput of the disks in the array was measured with dd from another drive connected for testing purposes, because the speeds felt awful when I started copying my backup onto the newly built array (from locally attached backup drives). Those speeds were only ~70MB/s, with the CPU running at only 40% and plenty of RAM available.

So I found the ZFS best-practice note about the number of drives and suspected that might be the reason. I have not reinstalled FreeNAS, so the tunables etc. are exactly the same as before. AFAIK I've never touched them, so they should be "vanilla" I guess. They are:


vfs.zfs.arc_max 10611113748
vm.kmem_size_max 14737657984
vm.kmem_size 11790126387
kern.ipc.nmbclusters 10000
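
(If it's useful, the same values can be read back from the live system with sysctl:)

sysctl vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max kern.ipc.nmbclusters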
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
Well, if the array is empty, you could test the impact of array size, block size, etc. on your configuration. Maybe they make a difference for you. You could even rebuild the initial configuration (with and without the WD drives) to see what performance it yields.

If you're looking to compare raw performance for benchmarking purposes, ZFSGuru makes this easy enough. Look at this entry from a thread I had on performance tuning: http://forums.freenas.org/showthrea...et-of-Benchmarks&p=44113&viewfull=1#post44113
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Few things...

1. Enabling autotune can give you some performance gains if you have 16GB of RAM.
2. What does a dd test tell you the speed is? Search the forum for the dd thread and run those tests on your machine; you didn't mention exact speeds...
3. You have compression on. You shouldn't expect amazing speeds with compression on, and disabling it won't help files already on the zpool since those will stay compressed. Also, if you do a dd with compression on and your source is /dev/zero, you will get 2GB/sec+ because zeros compress very fast and very efficiently (see the dd sketch below this list).
4. If you don't fully understand what a tunable does, what its negative side effects are, and how those might hurt you, you shouldn't be using that tunable. More often than not I see people adding tunables that only make things worse. One person I helped needed nothing more than ditching his tunables and enabling autotune.
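
As a rough sketch of the kind of dd test meant in points 2 and 3 (the dataset path /mnt/tank is only an example, and with compression on a /dev/zero test mostly measures compression, not the disks):

dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=20000   # ~20GB write; only meaningful with compression off on the target dataset
dd if=/mnt/tank/ddtest of=/dev/null bs=1m               # read the file back
rm /mnt/tank/ddtest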

I will tell you that most people consider anything over 60MB/sec to be good speed (Atoms of course are much lower). Are you sure you don't have a drive starting to flake out? A failing disk can completely kill performance. Have you run any SMART tests?
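
If you haven't, something like this will kick off a self-test and then show the results (da0 is an example device; some controllers need an extra -d option for smartctl):

smartctl -t long /dev/da0   # starts a long self-test in the background
smartctl -a /dev/da0        # afterwards: full SMART report, including the self-test log and error counters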

Also, how full is your zpool? Once you hit 80% full, the ZFS code changes the way it allocates files on the drives, and the zpool gets slower because there is less free disk space available.
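
A quick way to check is zpool list; the CAP column is the percentage you want to keep under that 80% mark:

zpool list   # CAP column shows how full the pool is
zfs list     # per-dataset used/available space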
 

uutzinger

Dabbler
Joined
Nov 27, 2011
Messages
43
You should conduct the performance test as described in the sticky thread at the top of this discussion group.

It is likely that your raw disk performance is much higher than 70MB/s, or even than 150MB/s. For a single disk, 70MB/s sustained might be an OK number. You should measure your overall performance by opening the shell in the GUI and using the explanations in the link above.

Depending on the combination of NICs, switch and file size, it's not uncommon to be stuck at 600Mbit/s, and with tuning of system variables you can reach 800Mbit/s.

Also, you might want to check "zpool status" in the shell to make sure your disks are not resilvering or scrubbing when you run the tests. I am not sure whether a running SMART self-test can affect performance, but you might want to disable SMART on the drives for the performance tests.
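
That is, something along these lines before each test run:

zpool status -v
# make sure no "scrub in progress" or "resilver in progress" shows up before benchmarking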
 