Low memory utilization FreeNAS 8.0.4 Beta2


Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
I'm not sure if this is a bug or on purpose.

There is almost no memory usage on FreeNAS 8.0.4 Beta2. I upgraded from 8.0.3, where my memory usage was always at least 12GB of my 16GB total. Now FreeNAS is only using about 150MB?

Just concerned, as I know ZFS wants to use lots of RAM to work correctly, but it doesn't seem to want to use it anymore. Is anyone else seeing this behavior? I will try some benchmarks and see if my read/write performance has changed at all.
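For the benchmarks I will probably just do something simple like the following (the /mnt/tank path is just my pool's mountpoint, adjust for yours):

# sequential write then read; if compression is enabled, zeros compress away and inflate the numbers
# the read will also come partly from ARC, which is exactly what I want to compare
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=10000
dd if=/mnt/tank/testfile of=/dev/null bs=1M
rm /mnt/tank/testfile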

Thanks
 

xbmcg

Explorer
Joined
Feb 6, 2012
Messages
79
I think the "free" memory is used for caching. If you do not use compression or deduplication and just run a mirror (RAID1), the system is fairly idle in terms of CPU and memory usage, since there are not many calculations necessary and no need to hold hashes for all the data chunks in memory to find duplicate entries.
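To double-check what a pool is actually doing, something like this shows the layout and whether compression is on ("tank" is just a placeholder pool name):

zpool status tank          # vdev layout (mirror, raidz, ...)
zfs get compression tank   # whether compression is enabled on the dataset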
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
I may have found the problem. Running ./zfs-stat -a shows:

vfs.zfs.arc_max=671088640
vm.kmem_size=1073741824
vm.kmem_size_max=1073741824

My kernel (including ZFS) can only use 1GB. Why did FreeNAS 8.0.4-BETA2 64-bit add that limitation?
I tried commenting out the lines in /boot/loader.conf and running "service sysctl restart", but the values don't change when I run zfs-stat. I wonder if the settings are hardcoded somewhere else, and why someone put this limitation in.

EDIT
After a reboot the maximums changed and RAM is being used again. You might not want to comment out vm.kmem_size_max entirely; mine changed to 330GB! Maybe just set it to a sane value, like the total amount of your RAM (in my case 16 gigs).
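For example, something like this in /boot/loader.conf is what I mean by sane values (just what I would try on my 16GB box, not an official recommendation):

vm.kmem_size="16384M"        # cap kernel memory at physical RAM
vm.kmem_size_max="16384M"    # keep a hard ceiling instead of removing the line
vfs.zfs.arc_max="12288M"     # leave a few GB for the OS and other services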
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I think if you look at the announcement thread for 8.0.4, gcooper mentions setting some limits for ZFS. You may want to post in that thread so he sees the feedback.

As far as deduplication goes, it does not exist in this version of ZFS (v15). It is one of the features added in v28 that so many people are looking for.


Some of the performance caps will be derived from the values in the FreeNAS 0.7 (legacy) branch, with minor tuning done to get the best bang for the user's "buck" with older hardware and FreeNAS 8.x. I'll be sure to run my tests on lower-spec'ed hardware lying around at iX (there are some unused Dell 1Us I found that will work for testing). There will be a writeup available before 8.0.4-RELEASE with general guidelines on performance / ZFS tuning.

I'll be on the IRC channel while working on the release cycle and reachable on the forums via PM. Please test out this release prior to the final drop if you have an opportunity, because I'd prefer not to repeat 8.0.3-RELEASE-p1.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Thanks, I'll reply in that thread. I noticed that even though the two max limits I mentioned earlier are now higher, my RAM usage is still only 8%. I haven't streamed media from my FreeNAS box yet, so I'm assuming it'll creep up, but in 8.0.3 it booted up already utilizing 12GB.
 

gcooper

Guest
Thanks, I'll reply in that thread. I noticed that even though the two max limits I mentioned earlier are now higher, my RAM usage is still only 8%. I haven't streamed media from my FreeNAS box yet, so I'm assuming it'll creep up, but in 8.0.3 it booted up already utilizing 12GB.

Thanks for the feedback. Others with higher-powered machines have said similar things.

Comment out vm.kmem_size_max in /boot/loader.conf, reboot, and let me know how things go please.
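If you are doing it from the shell, roughly this should do it (assuming the usual read-only root on 8.x, hence the remount; adjust if your setup differs):

mount -uw /                  # the FreeNAS 8.x root is normally mounted read-only
sed -i '' 's/^vm\.kmem_size_max/#vm.kmem_size_max/' /boot/loader.conf
reboot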
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi guys,

Quick question... where do I find this zfs-stat that Joshua mentioned in post #3?

-Will
 

gcooper

Guest
Hi guys,

Quick question... where do I find this zfs-stat that Joshua mentioned in post #3?

Not sure, but you can try out /usr/local/www/freenasUI/tools/arc_status.py, etc. from jhixson in -RC1 (it is a Python version of jhell's tool, originally written in Perl).
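If that script is not present on your build, the raw sysctls give the same basic picture (standard FreeBSD ZFS kstats, nothing FreeNAS-specific):

sysctl kstat.zfs.misc.arcstats.size     # current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.c_max    # ARC ceiling currently in effect
sysctl vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max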
 

propeller

Cadet
Joined
Sep 21, 2011
Messages
8
System down with no reaction

After the system went down a second time with 8.0.4-BETA2 and showed a console error message like "kmem_size is 1073741824... no more memory alloc possible ..." or so, I found this thread. It was the first time the system froze or halted in a way that could only be resolved with the power button. I guess it happened during large file transfers, but I don't know.
I have 48 GByte of memory, but could not boot FreeNAS in VMware ESXi 5 (mentioned in another thread), so I have a physical FreeNAS machine with 48 GByte.
Which values would you recommend? The current values are:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="1073741824"
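For what it is worth, this is the kind of scaling I was thinking of trying myself; it is just a guess based on the posts above and my 48 GByte, not something from any documentation:

vm.kmem_size="49152M"        # total physical RAM
vm.kmem_size_max="49152M"
vfs.zfs.arc_max="40960M"     # leave roughly 8 GByte for the OS and services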
Thank you.
 

gcooper

Guest
Given the discussion and testing notes, I'm going to yank setting everything but nmbclusters. Long story short, the iX autotuner was donated to the project and will be used as a basis for automatic system tuning, similar to zfskerntune. We're working out the kinks, but a first draft should be available in 8.2.0-BETA1.
 