Kmem-related panics on a system with 2GB RAM (yes, another one)


TempestDash

Guest
Hi, I'm new to FreeNAS entirely, but I'm trying to get my feet wet and learn because I'm tired of dealing with dynamic disks on W2k8 and the overhead Windows produces. I'm running into a few stumbling blocks, though, and searching the forums gets me a LOT of information on 7.x, but I have no idea whether that info is still valid for 8.x.

Thus:

My system information is as follows:
Code:
OS Version:	FreeBSD 8.2-RELEASE-p2
Platform:	AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
Memory:	1903MB
Load Average:	0.03, 0.17, 0.11
FreeNAS Build:	FreeNAS-8.0.1-BETA4-i386


I have just freshly installed FreeNAS on a speedy 4GB CF card plugged into a CF->IDE adapter, and I intend to build a 4-drive RAIDZ1 with 1.5 TB SATA drives.

Without changing any settings, I get an alert during boot warning me that the minimum kmem size is 512MB and that I should add a line to loader.conf to tune it. Fine. Initially I ignored that.

After creating my zvol, I started copying data and ran into panics, every time with a warning about kmem size.

After searching for hours for what to do about this, and having a hell of a lot of trouble finding reliable information on 8.x as opposed to 7.x, I decided to add vm.kmem_size and vm.kmem_size_max lines to loader.conf, setting them equal to my RAM size (2G), and to disable prefetch.
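For the record, the lines I added to loader.conf looked roughly like this (I'm reconstructing them from what I described above, so the exact spelling of the prefetch line is my best guess):
Code:
vm.kmem_size="2G"
vm.kmem_size_max="2G"
vfs.zfs.prefetch_disable="1"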

This caused the system to hang on startup with a warning that there was a problem allocating memory. I jumped into the boot loader, unset the two values concerning kmem, and it booted without a problem.
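In case anyone else hits the same hang: what got me past it was escaping to the loader prompt from the boot menu and clearing the tunables with the standard loader commands, roughly:
Code:
OK unset vm.kmem_size
OK unset vm.kmem_size_max
OK boot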

I tried half my memory (1G) and that had the same result. I tried listing the size in MB instead of GB (1024M) and got the same error. I finally tried EXACTLY the minimum stated, 512M.

That booted. The warning went away as well.

But I have far more than half a gig of memory, and the reporting graphs say that I almost always have 1.5G of physical memory free, even when transferring files.

So my question is: why wouldn't I be able to set a kmem size over 512M? Is this a user error on my part due to my massive ignorance regarding BSD, or some quirk of FreeNAS I have not taken into account?

Incidentally, I don't seem to be running into the panics as frequently, but file transfer rates are slower than I'm used to: 17MB/sec between two computers over a gigabit connection. On the similar RAID5 setup I had with software RAID in Windows, I was getting at least four times that speed.

I really just need some advice here. If there is more information needed, please tell me exactly how to get it, as I can only barely get around using the console through ssh.
 

ProtoSD

MVP
You should use the AMD64 ISO; I noticed you are using the i386 version. After you do that, the settings in loader.conf will work for you. With the i386 version, the highest you can go is about 768M. The other thing is, DON'T make it the entire 2G. *Maybe* 1.5G, but you need a balance between the kernel and ZFS. Probably make the 'max' variable 1.5G and the other about half of that to start with. vm.kmem_size is like the minimum/starting point, and the max is the limit on what it's allowed to use. Give that a try and post back, and don't forget to use the AMD64 version. I know it seems strange, but it's for 64-bit processors, which you have.
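To illustrate (these exact numbers are just an example of the balance I mean, not magic values), the loader.conf lines would look something like:
Code:
vm.kmem_size="768M"
vm.kmem_size_max="1536M"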
 

globus999

Contributor
There is a lot of ZFS kernel tuning info in the BSD forums. What I found that works for me (x64) is the following:

"X64

If this is the 64-bit version of FreeBSD, don't meddle with vm.kmem_size_max and set vm.kmem_size to 1.5x RAM. In your case, set it to 12 GB.

Then set vfs.zfs.arc_max to somewhere around half of your RAM, depending on what else you will be running on that box.

This really is a FAQ, and searching the forums will bring up many threads on this issue. As will searching the FreeBSD mailing lists.
"

HTH

Oh, BTW, FN8 is a hog wrt RAM. FN7 runs happily along with 512 MB of RAM, leaving 1.5 GB to ZFS. FN8, on the other hand, requires 1.5 GB of RAM, leaving only 512 MB for ZFS, which is quite smelly. These are the results I got from my test box. So, using "only" 2 GB for FN8 will leave you with just 512 MB for ZFS. So, your experience is 100% correct. The solution above is for RAM > 2 GB.
 

TempestDash

Guest
You should use the AMD64 ISO, I noticed you are using the i386 version.

*stares*

Holy cow! I have no idea how I missed that. I had downloaded both versions but had intended to use the amd64 version. Apparently I got my filenames mixed up when I was flashing the CF card.

I'm feeling really stupid right now for not noticing that. Thank you!

@globus999: Is 4GB enough to get by with passable performance on a 4TB array? Because my motherboard uses DDR2-800 and has only two slots (it's a micro-ITX board), it's starting to get expensive to buy larger sticks. 8GB (4GBx2) looks like nearly $100 on Newegg, and at that point I might as well buy a cheap newer board that supports DDR3 for future-proofing...
 

TempestDash

Guest
I swapped out for the 64-bit version of Beta4 and set the vm.kmem_size and vfs.zfs.arc_max values in loader.conf as suggested below:

If this is the 64-bit version of FreeBSD, don't meddle with vm.kmem_size_max and set vm.kmem_size to 1.5x RAM. In your case, set it to 12 GB.

Then set vfs.zfs.arc_max to somewhere around half of your RAM, depending on what else you will be running on that box.

Things look stable right now (actually, the panics went away once I set kmem_size to 512M on the 32-bit version), and speed has doubled during file transfers. Not ideal, but much better than before.
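For anyone else following along, you can check what values actually took effect after boot from the SSH console; as far as I can tell both show up as plain sysctls, reported in bytes:
Code:
sysctl vm.kmem_size vfs.zfs.arc_max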

I did run into one issue where a warning about an 'indefinite wait' on a swap file caused one file transfer to stall, but after restarting the transfer everything has moved along fine. I've transferred about 0.5TB so far without a hang, so I think it's working.

I would like better performance, though, so I'm going to see if I can get to 4GB at a reasonable price.

Thanks again for the help! I'm still pretty ashamed that such an obvious oversight on my part was causing the trouble.
 