praecorloth
Contributor
Alright folks, so I was toying around with my FreeNAS 8.0.2 box and ran into a problem where copying data via Samba to the ZFS share would kernel panic the machine after about 30 gigabytes. This is my story.
The idea behind this thread is that FreeNAS (and, I believe, other ZFS vendors) recommends 4GB of memory and an x86-64 processor when using ZFS. The thing is, if you're like me, you just want a NAS to fill a few fairly basic needs. A full-on 64-bit processor and 4GB of memory starts to sound like an actual system rather than an appliance. Here's my setup.
Pentium 4 3.2GHz
1.5GB DDR2 (a hodgepodge of different speeds: 2x 512MB and 2x 256MB)
3x 500GB SATA drives in RAID-Z
2x 320GB SATA drives mirrored
FreeNAS 8.0.2-RELEASE
The situation
So what happened was this: after creating my RAID-Z and Samba share, I started copying my data over to FreeNAS from my backup drive. After about 30GB had transferred, I would get the kernel panic referencing vm.kmem_size. There is also a message that pops up frequently on FreeNAS boxes complaining about vm.kmem_size and vm.kmem_size_max. I will try to dig up the exact kernel panic message when I get home.
The research
So, like any of us would do, I immediately hit up the University of Google and came across a number of posts about modifying /boot/loader.conf. The suggested solution was that adding the following lines would clear up the problem.
Code:
vm.kmem_size="512M"
vm.kmem_size_max="512M"
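For anyone following along, here's roughly how I applied those lines from the shell. This is just a sketch based on my box: FreeNAS 8 keeps the root filesystem mounted read-only, so I'm assuming you need to remount it read-write before /boot/loader.conf will accept the edit, and loader tunables only take effect after a reboot.
Code:
# Sketch only -- assumes a read-only root like my FreeNAS 8.0.2 install.
mount -uw /                                   # remount root read-write
printf '%s\n' 'vm.kmem_size="512M"' 'vm.kmem_size_max="512M"' >> /boot/loader.conf
mount -ur /                                   # flip root back to read-only
reboot                                        # loader tunables are only read at boot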
I also came across the ZFS Tuning Guide, which I believe someone in the LKF pointed me to a number of months ago. The Tuning Guide has a lot of good information in it.
The solution
Adding the two kmem_size options to loader.conf did not fix the problem. I did some more reading in the Tuning Guide and came across this piece.
Some workloads need greatly reduced ARC size and the size of VDEV cache. ZFS manages the ARC through a multi-threaded process. If it requires more memory for ARC ZFS will allocate it. Previously it exceeded arc_max (vfs.zfs.arc_max) from time to time, but with 7.3 and 8-stable as of mid-January 2010 this is not the case anymore. On memory constrained systems it is safer to use an arbitrarily low arc_max. For example it is possible to set vm.kmem_size and vm.kmem_size_max to 512M, vfs.zfs.arc_max to 160M, keeping vfs.zfs.vdev.cache.size to half its default size of 10 Megs (setting it to 5 Megs can even achieve better stability, but this depends upon your workload).
There is one example (CySchubert) of ZFS running nicely on a laptop with 768 Megs of physical RAM with the following settings in /boot/loader.conf:
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
So it struck me that I might need to put the two vfs.zfs settings into loader.conf as well. Here's what I came up with.
Code:
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="60M"
vfs.zfs.vdev.cache.size="10M"
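If it helps anyone double-check their own box, these tunables should be readable with sysctl after the reboot. This is just stock FreeBSD sysctl, nothing FreeNAS-specific; the byte values in the comments are what I'd expect the "M" suffixes to expand to.
Code:
# Confirm the loader tunables were picked up after reboot.
sysctl vm.kmem_size vm.kmem_size_max    # expect 536870912 (512M) for each
sysctl vfs.zfs.arc_max                  # expect 62914560 (60M)
sysctl vfs.zfs.vdev.cache.size          # expect 10485760 (10M)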
Given everything I tried, I wouldn't mind anyone saying I just picked those numbers out of thin air; that's pretty much what I did. The example configuration was based on 768MB of memory and I have 1.5GB, so I'm guessing my vm.kmem_size and vm.kmem_size_max can be larger. But likely not larger than 512MB, since according to the Tuning Guide going beyond that could require a kernel rebuild to actually work. So 512MB was the safe maximum.
I increased vfs.zfs.arc_max and vfs.zfs.vdev.cache.size slightly, based on the increase to kmem_size. I'm not sure whether that was the correct way to do things, but it's what I went with. After this change I was able to copy over ~35GB of data, followed by another roughly 75GB, without issue.
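For what it's worth, if you want to see whether the ARC is actually staying under the cap while a copy is running, something like this quick loop should work. I'm assuming FreeNAS exposes the standard FreeBSD ZFS stats under kstat.zfs.misc.arcstats; that's where I'd look, anyway.
Code:
# Print the current ARC size (in bytes) every 5 seconds during a copy.
while true; do
    sysctl -n kstat.zfs.misc.arcstats.size
    sleep 5
done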
Full disclosure: I do not know what arc_max or vdev.cache.size affect, and I have minimal knowledge of what the kmem_size settings are for. If anyone can and would like to expand on what I've written here, I think that would be awesome, as the point of this thread is to help other people who might be in the same situation I was.