Hey guys,
For the last couple of days I've been trying to get a FreeNAS system with some ZFS storage up and running. Unfortunately I'm running into some nasty performance issues on ZFS. Actually, I wouldn't really call it a performance issue; it's more like a serious problem:
Initial performance over the network via AFP or rsync is about 100-115 MB/s - nice. Unfortunately, after a while the throughput drops rapidly: the transfer stalls for a few seconds and I can hear the hard disks thrashing. After a couple of seconds the transfer resumes at a much lower speed, builds up again, and stalls again. Eventually the throughput stops recovering and stays at about 15 MB/s - crap.
I also tried a local file transfer from a single SATA drive to the ZFS pool using rsync - same issue! So it's not a network problem.
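If anyone wants to rule out rsync as well, a plain sequential write straight to the pool should show the same pattern. Something along these lines (the pool name and mount point are just placeholders for my setup, and it assumes compression is off, otherwise /dev/zero compresses away to nothing):

# write ~20 GB of zeros directly to the pool and check the transfer rate dd reports
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=20000
# in a second shell, watch the disks while it runs
gstat
# clean up afterwards
rm /mnt/tank/ddtest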
But let me give you some details about what I'm trying to do and what steps I've taken to try to solve this.
The goal is to set up a FreeNAS system with 16 2TB drives configured as one ZFS pool consisting of two raidz vdevs (8 disks each).
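For reference, the layout I'm after corresponds to something like this on the command line (the pool name "tank" and the da* device names are just placeholders; the actual names depend on whether the disks sit behind the 3ware or on the onboard ports):

# one pool made of two 8-disk raidz vdevs
zpool create tank \
    raidz da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz da8 da9 da10 da11 da12 da13 da14 da15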
The hardware looks like this:
- Mainboard Gigabyte GA-EX58-UD5
- Intel Core i7, 2.66 GHz
- 6GB RAM
- 3ware 9650SE-12ML
- 16x Samsung 2TB HDDs
- 2GB SanDisk Cruzer flash drive
- FreeNAS 8.0.2 64-bit
- Drive cache on the 3ware is enabled! (see below)
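Regarding the 3ware drive cache: in case someone wants to double-check that setting on their own card, with 3ware's CLI it should look roughly like this (assuming the card shows up as controller /c0 and the unit as /u0 - adjust to your setup, and tw_cli may need to be installed separately):

# show the unit details, including the current cache setting
tw_cli /c0/u0 show
# enable the write cache on that unit
tw_cli /c0/u0 set cache=on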
So here are some of the steps I took:
- set up an 8-disk raidz on the 3ware controller (single-disk units) - no change
- set up an 8-disk raidz on the 3ware controller (JBOD) - no change
- set up a 4-disk raidz on the onboard controller - no change
- removed the 3ware completely and set up a 4-disk raidz on the onboard controller - no change
- installed 18GB of RAM - no change
- replaced the mainboard - no change
- configured an 8-disk RAID5 with UFS on the 3ware controller - no issues, write speed is consistently well above 100 MB/s
- tried some ZFS tuning (see the loader.conf sketch after this list) - no change
- tried FreeNAS 8.0.3 RC2 - no change
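For anyone wondering what kind of tuning I mean: the usual suspects are FreeBSD loader tunables like the ones below. The values shown are only examples to illustrate the idea, not a recommendation and not necessarily the exact numbers I used:

# /boot/loader.conf (example values only, takes effect after a reboot)
vfs.zfs.arc_max="4294967296"     # cap the ARC at 4 GB so it can't grab all RAM
vfs.zfs.txg.timeout="5"          # commit transaction groups more often
vfs.zfs.prefetch_disable="1"     # disable file-level prefetch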
When I had the 18GB of RAM installed, I noticed that it took longer for the issue to appear. So I tried again and kept an eye on memory consumption. It seems the trouble starts as soon as the system has all available memory reserved.
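For anyone who wants to watch the same thing: the ZFS ARC is the usual suspect for eating all the RAM, and its current size and cap can be read straight from sysctl (these counters should exist on any FreeBSD/FreeNAS box with ZFS loaded):

# current ARC size and its configured maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max
# top also shows this memory as "Wired" while the ARC grows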
I'm running out of ideas for what else to try. The only thing left I can think of is a caching SSD. I'm gonna try that tomorrow.
If any of you guys have an idea of what could be causing this and how I could fix it, please help me out here!
I would really like to have this running on ZFS, but the way it looks right now, I might have to settle for a "good old" 12-disk hardware RAID6. :(
Any suggestions would be really appreciated.
Cheers,
Ice
EDIT:
- added a 120GB SSD as a cache device to the pool (see the sketch after this list) - no change
- configured a 4+1 raidz - no change
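For completeness, attaching an SSD as a cache (L2ARC) device comes down to something like this (pool and device names are placeholders for my setup):

# add the SSD as an L2ARC cache device to the existing pool
zpool add tank cache da16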
Well, I'm officially out of ideas now. I think it must be an issue with the hardware, but I just can't figure it out.
I've already replaced the mainboard, RAM, CPU and controller. The disks run perfectly fine as a hardware RAID. Damn, this is driving me crazy.
EDIT:
- tried FreeNAS 0.7.5.8854 - no change