How I did ZFS through FreeNAS on a 32bit proc with low memory.


praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Alright folks, so I was toying around with my FreeNAS 8.0.2 box and I ran into a problem where copying data to the ZFS share via Samba would kernel panic the machine after about 30 gigabytes. This is my story.

The idea behind this thread is that FreeNAS (and other ZFS vendors, I believe) recommend 4GB of memory and an x86-64 processor when using ZFS. Well, the thing is, if you're like me, you just want a NAS to fill a few fairly basic needs. A full-on 64-bit processor and 4GB of memory starts to sound like an actual system rather than an appliance. Here's my setup.

Pentium 4 3.2GHz
1.5GB DDR2 (a hodgepodge of different speeds: 2x 512MB and 2x 256MB)
3x 500GB SATA drives in RAID-Z
2x 320GB SATA drives mirrored
FreeNAS 8.0.2-RELEASE


The situation

So what happened was, after creating my RAID-Z and Samba share, I started copying my data over to FreeNAS from my backup drive. After about 30GB had transferred, I would get the kernel panic regarding vm.kmem_size. There is also a message that pops up frequently on FreeNAS boxes complaining about vm.kmem_size and vm.kmem_size_max. I will try to dig up the exact kernel panic message when I get home.
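For anyone hitting the same thing: these sysctls (standard FreeBSD 8 names, so take this as a sketch rather than FreeNAS-specific advice) will show what the box is currently running with:

Code:
# Current kernel memory limits and ZFS cache tunables
sysctl vm.kmem_size vm.kmem_size_max
sysctl vfs.zfs.arc_max vfs.zfs.vdev.cache.size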


The research

So like any of us would do, I immediately hit up the University of Google and came across a number of posts talking about modifying /boot/loader.conf. The solutions said that adding the following lines would clear up the problem.

Code:
vm.kmem_size="512M"
vm.kmem_size_max="512M"


I also came across the ZFS Tuning Guide, which I believe someone in the LKF pointed me to a number of months ago as well. The Tuning Guide has a lot of good information in it.


The solution

Adding the two kmem_size options to loader.conf did not fix the problem. I did some more reading in the Tuning Guide and came across this piece.

Some workloads need greatly reduced ARC size and the size of VDEV cache. ZFS manages the ARC through a multi-threaded process. If it requires more memory for ARC ZFS will allocate it. Previously it exceeded arc_max (vfs.zfs.arc_max) from time to time, but with 7.3 and 8-stable as of mid-January 2010 this is not the case anymore. On memory constrained systems it is safer to use an arbitrarily low arc_max. For example it is possible to set vm.kmem_size and vm.kmem_size_max to 512M, vfs.zfs.arc_max to 160M, keeping vfs.zfs.vdev.cache.size to half its default size of 10 Megs (setting it to 5 Megs can even achieve better stability, but this depends upon your workload).

There is one example (CySchubert) of ZFS running nicely on a laptop with 768 Megs of physical RAM with the following settings in /boot/loader.conf:

vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"

So it struck me that I might need to put the two vfs.zfs settings into loader.conf as well. Here's what I came up with.

Code:
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="60M"
vfs.zfs.vdev.cache.size="10M"


Given everything I tried, I wouldn't mind anyone saying I just picked those numbers out of thin air; that's pretty much what I did. The example configuration was based on 768MB of memory, and I have 1.5GB, so I'm guessing my vm.kmem_size and kmem_size_max can be larger. But likely not larger than 512MB, since according to the Tuning Guide going beyond that could require a kernel rebuild to actually work. So 512MB was the safe maximum.

vfs.zfs.arc_max and vdev.cache.size I increased slightly, in proportion to the increase in kmem_size. I'm not sure whether that was the correct way to do things, but that's what I went with. After this change I was able to copy over ~35GB of data, followed by another roughly 75GB, without issue.
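If you want to confirm the cap is actually being honored, the ARC statistics sysctls (standard FreeBSD 8 names; consider this a sketch) can be read at runtime:

Code:
# Current ARC size and its target maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max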

Full disclosure: I do not know what arc_max or vdev.cache.size affect, and I have minimal knowledge of what the kmem_size settings are for. If anyone can and would like to expand upon what I've written here, I think that would be awesome, as the point of this is to help other people who might be in the same situation I was in.
 

Durkatlon

Patron
Joined
Aug 19, 2011
Messages
414
Thanks for posting. I have been extolling the virtues of low arc_max here for a while, but people continue to believe that ZFS with less than a bazillion gigs of RAM is impossible.
 

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Do you know more about arc_max? I can't seem to find other references to it in the Tuning Guide. All I know is it saved my bacon. :)
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
http://wiki.freebsd.org/ZFSTuningGuide

Generic ARC discussion

The value for vfs.zfs.arc_max needs to be smaller than the value for vm.kmem_size (not only ZFS is using the kmem).

To monitor the ARC, you can use the script at http://jhell.googlecode.com/files/arc_summary.pl (ported from the Solaris version at http://cuddletech.com/arc_summary/). Another script which may be helpful is http://jhell.googlecode.com/files/arcstat.pl (ported from the Solaris version at http://blogs.sun.com/realneel/entry/zfs_arc_statistics).

To improve the random read performance, a separate L2ARC device can be used (zpool add <pool> cache <device>). A cheap solution is to add a USB memory stick (see http://www.leidinger.net/blog/2010/02/10/making-zfs-faster/). The high performance solution is to add an SSD.

Using an L2ARC device will increase the amount of memory ZFS needs to allocate, see http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34674.html for more info.
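As a concrete (and purely hypothetical) example of the command quoted above, adding a spare SSD that shows up as da1 to a pool named tank would look like this; both names are placeholders:

Code:
# Add a cache (L2ARC) device to an existing pool -- "tank" and "da1" are placeholders
zpool add tank cache da1
# Confirm the cache device now shows up under the pool
zpool status tank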

i386

Typically you need to increase vm.kmem_size_max and vm.kmem_size (with vm.kmem_size_max >= vm.kmem_size) to not get kernel panics (kmem too small). The value depends upon the workload. If you need to extend them beyond 512M, you need to recompile your kernel with increased KVA_PAGES option, e.g. add the following line to your kernel configuration file to increase available space for vm.kmem_size beyond 1 GB:

options KVA_PAGES=512

To choose a good value for KVA_PAGES read the explanation in the sys/i386/conf/NOTES file.
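For reference, on a plain FreeBSD install with a source tree in /usr/src, the rebuild would go roughly as sketched below. FreeNAS ships a prebuilt image, so this does not apply to it directly; MYKERNEL is a placeholder config name.

Code:
# Sketch: build an i386 kernel with a larger KVA_PAGES (stock FreeBSD, not FreeNAS)
cd /usr/src/sys/i386/conf
cp GENERIC MYKERNEL                        # MYKERNEL is a placeholder name
echo 'options KVA_PAGES=512' >> MYKERNEL   # allow a larger kernel address space
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL       # reboot afterwards to use the new kernel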

By default the kernel receives 1 GB of the 4 GB of address space available on the i386 architecture, and this is used for all of the kernel address space needs, not just the kmem map. By increasing KVA_PAGES you can allocate a larger proportion of the 4 GB address space to the kernel (2 GB in the above example), allowing more room to increase vm.kmem_size. The trade-off is that user applications have less address space available, and some programs (e.g. those that rely on mapping data at a fixed address that is now in the kernel address space, or which require close to the full 3 GB of address space themselves) may no longer run. If you change KVA_PAGES and the system reboots (no panic) after running a while this may be because the address space for userland applications is too small now.

For *really* memory constrained systems it is also recommended to strip out as many unused drivers and options from the kernel (which will free a couple of MB of memory). A stable configuration with vm.kmem_size="1536M" has been reported using an unmodified 7.0-RELEASE kernel, relatively sparse drivers as required for the hardware and options KVA_PAGES=512.

Some workloads need greatly reduced ARC size and the size of VDEV cache. ZFS manages the ARC through a multi-threaded process. If it requires more memory for ARC ZFS will allocate it. Previously it exceeded arc_max (vfs.zfs.arc_max) from time to time, but with 7.3 and 8-stable as of mid-January 2010 this is not the case anymore. On memory constrained systems it is safer to use an arbitrarily low arc_max. For example it is possible to set vm.kmem_size and vm.kmem_size_max to 512M, vfs.zfs.arc_max to 160M, keeping vfs.zfs.vdev.cache.size to half its default size of 10 Megs (setting it to 5 Megs can even achieve better stability, but this depends upon your workload).

There is one example (CySchubert) of ZFS running nicely on a laptop with 768 Megs of physical RAM with the following settings in /boot/loader.conf:

vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
Kernel memory should be monitored while tuning to ensure a comfortable amount of free kernel address space. The following script will summarize kernel memory utilization and assist in tuning arc_max and VDEV cache size.

Code:
#!/bin/sh -
# Sum the sizes of the kernel and loaded modules (kldstat reports them in hex; dc does the math)
TEXT=`kldstat | awk 'BEGIN {print "16i 0";} NR>1 {print toupper($4) "+"} END {print "p"}' | dc`
# Sum the kernel malloc memory in use (vmstat -m reports KB)
DATA=`vmstat -m | sed -Ee '1s/.*/0/;s/.* ([0-9]+)K.*/\1+/;$s/$/1024*p/' | dc`
TOTAL=$((DATA + TEXT))

echo TEXT=$TEXT, `echo $TEXT | awk '{print $1/1048576 " MB"}'`
echo DATA=$DATA, `echo $DATA | awk '{print $1/1048576 " MB"}'`
echo TOTAL=$TOTAL, `echo $TOTAL | awk '{print $1/1048576 " MB"}'`
Note: Perhaps there is a more precise way to calculate / measure how large of a vm.kmem_size setting can be used with a particular kernel, but the authors of this wiki do not know it. Experimentation does work. However, if you set vm.kmem_size too high in loader.conf, the kernel will panic on boot. You can fix this by dropping to the boot loader prompt and typing set vm.kmem_size="512M" (or a similar smaller number known to work.)

The vm.kmem_size_max setting is not used directly during the system operation (i.e. it is not a limit which kmem can "grow" into) but for initial autoconfiguration of various system settings, the most important of which for this discussion is the ARC size. If kmem_size and arc_max are tuned manually, kmem_size_max will be ignored, but it is still required to be set.

The issue of kernel memory exhaustion is a complex one, involving the interaction between disk speeds, application loads and the special caching ZFS does. Faster drives will write the cached data faster but will also fill the caches up faster. Generally, larger and faster drives will need more memory for ZFS.

To increase performance, you may increase kern.maxvnodes (in /etc/sysctl.conf) way up if you have the RAM for it (e.g. 400000 for a 2GB system). On i386, keep an eye on vfs.numvnodes during production to see where it stabilizes. (AMD64 uses direct mapping for vnodes, so you don't have to worry about address space for vnodes on this architecture).
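For example, the suggested value for a 2GB system would go into /etc/sysctl.conf like this (the number is just the guideline quoted above):

Code:
# /etc/sysctl.conf -- raise the vnode limit (guideline above for a 2GB system)
kern.maxvnodes=400000

Then watching the output of sysctl vfs.numvnodes during normal use shows where the vnode count actually stabilizes.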
 

Durkatlon

Patron
Joined
Aug 19, 2011
Messages
414
And here is what I have set on my FreeNAS7 boxes which have been rock-solid stable for aeons:
Code:
vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="100M"

Note that the FN7 builds have KVA_PAGES=512 by default, so you can set kmem_map sizes this large. With FN8 i386 default builds you will have to restrict yourself to 512M.

My FN8 boxes are 64-bit so none of this stuff really applies much.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
[Y]ou just want a NAS to fill a few, fairly basic needs. A full on 64bit processor and 4GB of memory starts to sound like an actual system rather than an appliance.

Yeah, there's some truth to that. But that's just the way of things. I like to think about it historically.

Back around 1990, a big beefy Sun 3/260 fileserver might have 32MB of RAM and a 68020, and an end user's workstation might have 12MB of RAM. You could serve up NFS and YP and all that. :smile: In the mid '90's, a FreeBSD box with a whopping 256MB of RAM was considered a Beefy Beast, and could really do just about any task. But end user platforms had increased in requirements to need at least 32MB to operate graphically comfortably... In the mid 2000's, servers with 4 or 8GB of RAM were commonplace and "big". Now 32-96GB is "big".

Of course, that old 3/260 can still serve up NFS files just as efficiently as it did back then, but that's at 10Mbps with a 25MHz processor.

In that same manner, pretty much any "appliance" NAS you can find will fall somewhere in this spectrum; today's SoC systems might be equivalent to a late '90's server, for example. And they'll be able to serve files just as competently.

ZFS started life as an unreasonable resource pig. Back in the mid 2000's, a requirement that you set aside gigs of RAM just so your filesystem could be happy would have been fairly onerous to many server designers. One could say it's still an unreasonable resource pig, as many computers still max out at 8GB. But at least that much memory is no longer obscenely expensive. The thing is, in ten years, ZFS's requirements won't seem particularly bad even for an embedded application. But all of us here? We're early adopters. Running an early adopter platform. The FreeNAS developers have enough work on their hands just making the platform work right under unstressed conditions. It'd be *nice* if they'd provide a smaller-footprint tunable or something like that, but supporting this has got to be rough even as it is. That said, it's also great to have people willing to experiment with tuning, so pay careful attention to Durkatlon in particular, who seems to have some quality input in this area.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
@praecorloth,

I've added a link to this thread in the FAQ. Thank you for your helpful work in explaining how to make ZFS work better on i386 and low RAM.
 
Joined
Nov 14, 2011
Messages
5
I pulled these values out of the zfstuner extension for FreeNAS 7 by daoyama. For those unfamiliar with this extension, it would write values to loader.conf for you based on the amount of RAM you have installed, or the amount of RAM you wanted to dedicate just to ZFS. These values were created with FreeBSD 7 in mind, but I believe them to be valid for FreeBSD 8.

Code:
i386    512MB RAM:  vm.kmem_size="340M"    vfs.zfs.arc_min="60M"     vfs.zfs.arc_max="60M"
i386   1024MB RAM:  vm.kmem_size="512M"    vfs.zfs.arc_min="128M"    vfs.zfs.arc_max="128M"
i386   1536MB RAM:  vm.kmem_size="1024M"   vfs.zfs.arc_min="256M"    vfs.zfs.arc_max="256M"
i386   2048MB RAM:  vm.kmem_size="1400M"   vfs.zfs.arc_min="400M"    vfs.zfs.arc_max="400M"
64-bit    2GB RAM:  vm.kmem_size="1536M"   vfs.zfs.arc_min="512M"    vfs.zfs.arc_max="512M"
64-bit    3GB RAM:  vm.kmem_size="2048M"   vfs.zfs.arc_min="1024M"   vfs.zfs.arc_max="1024M"
64-bit    4GB RAM:  vm.kmem_size="2560M"   vfs.zfs.arc_min="1536M"   vfs.zfs.arc_max="1536M"
64-bit    6GB RAM:  vm.kmem_size="4608M"   vfs.zfs.arc_min="3072M"   vfs.zfs.arc_max="3072M"
64-bit    8GB RAM:  vm.kmem_size="6656M"   vfs.zfs.arc_min="5120M"   vfs.zfs.arc_max="5120M"
64-bit   12GB RAM:  vm.kmem_size="10752M"  vfs.zfs.arc_min="9216M"   vfs.zfs.arc_max="9216M"
64-bit   16GB RAM:  vm.kmem_size="14336M"  vfs.zfs.arc_min="12288M"  vfs.zfs.arc_max="12288M"
64-bit   24GB RAM:  vm.kmem_size="22528M"  vfs.zfs.arc_min="20480M"  vfs.zfs.arc_max="20480M"
64-bit   32GB RAM:  vm.kmem_size="30720M"  vfs.zfs.arc_min="28672M"  vfs.zfs.arc_max="28672M"
64-bit   48GB RAM:  vm.kmem_size="47104M"  vfs.zfs.arc_min="45056M"  vfs.zfs.arc_max="45056M"
64-bit   64GB RAM:  vm.kmem_size="63488M"  vfs.zfs.arc_min="61440M"  vfs.zfs.arc_max="61440M"

In all the examples I have seen vm.kmem_size_max and vm.kmem_size are set to the same thing.
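To put one row in loader.conf form: taking the i386/1536MB entry (the closest match to the hardware at the top of this thread), a hypothetical /boot/loader.conf would contain the lines below. Note that, per the earlier posts, a kmem_size above 512M on a stock FN8 i386 kernel would also need the KVA_PAGES change.

Code:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_min="256M"
vfs.zfs.arc_max="256M"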
 

Bert Rolston

Cadet
Joined
Nov 23, 2011
Messages
8
Hi folks,

I'm new to FreeNAS but not *nix operating systems, Fedora Core being my OS of choice.

I wanted a simple NAS solution and found FreeNAS.
So I've been playing with FreeNAS now for a few days, and seem to have most of the wrinkles ironed out.

I've installed 8.0.2-i386 on a C2Duo @ 3GHz with 4GB RAM and a couple of 250GB SATA drives.
I set one drive up as a ZFS volume then created datasets.

I happened to notice two messages on the screen attached to the NAS: one advising that prefetch was disabled on i386, and one about a minimum kmem_size.
Both messages mention adding stuff to the /boot/loader.conf file.

So I SSH into the NAS and try editing /boot/loader.conf using vi ....... BUT ...... it's located on a Read Only filesystem.

So my question is

How did you edit the loader.conf file? :confused:


TIA
Bert
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
So I SSH into the NAS and try editing /boot/loader.conf using vi ....... BUT ...... it's located on a Read Only filesystem.

So my question is

How did you edit the loader.conf file? :confused:

First of all you have to make the filesystem writeable, then edit the file, then return it to being a read-only filesystem. So...

Code:
mount -uw /             # remount the root filesystem read-write
vi /boot/loader.conf    # make your edits
sync                    # flush the changes to disk
mount -ur /             # return the root filesystem to read-only
reboot                  # reboot so the new loader settings take effect
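A quick way to double-check that the root filesystem really went back to read-only before rebooting (plain FreeBSD commands, just a sketch):

Code:
# The flags at the end of the line should include "read-only" again
mount | grep ' on / '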
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I've installed 8.0.2-i386 on a C2Duo @ 3GHz with 4GB RAM and a couple of 250GB SATA drives.

Hi Bert, isn't your C2Duo a 64-bit processor? I can't keep up with all the different variations and names of Intel processors, but I'd guess that it is. If I'm right, then you should use the AMD64 installation; it's for both AMD and Intel 64-bit processors and will save you the trouble of needing to make the changes you're doing. Of course, I could be completely wrong, in which case, never mind ;-)
 

Bert Rolston

Cadet
Joined
Nov 23, 2011
Messages
8
Hey protosd,

It is 64-bit. I got stuck on a particular track 8-( because I'd downloaded the 64-bit ISO and it didn't work.
So I guess I'll try downloading it again in case I got a corrupted download.

Thanks for the reply.

Bert
 

Bert Rolston

Cadet
Joined
Nov 23, 2011
Messages
8
Hey Milhouse,

Thanks for the info.
After upgrading to the 64 bit version I still got the prefetch message.
So I used your code and successfully edited the loader.conf file.

Now for more testing!

Regards,
Bert
 

Bert Rolston

Cadet
Joined
Nov 23, 2011
Messages
8
Hi protosd,

It turns out my first 64bit download was corrupt.

Thanks again for helping me take off the blinkers!

Cheers,
Bert
 