FreeNAS ignores newly added RAM

Status
Not open for further replies.

Pilot1400

Cadet
Joined
May 23, 2015
Messages
5
Hi everyone,

not sure if this is the right place to post, but since I am really rather new to FreeNAS this might be a good fit. I set up a new box, an HP SE316M1 with 16GB of RAM, running FreeNAS 9.3 Stable with iSCSI, and everything works nicely. However, my ARC hit rates could be better, so following advice here on the forums I added another 16GB of RAM. The RAM is recognized by the server and also shows up in FreeNAS, but it is ignored: the system continues to behave as if it still had only 16GB.

Here is what I did so far, following advice I found here on some threads:
- reboot with autotune on, did nothing
- reboot with autotune off, did nothing
- changed values on tunables, namely vfs.zfs.arc_max and vm.kmem_size, reboot, did nothing
- turned autotune off, deleted all tunables, reboot, turned autotune on, reboot -> changes were made:
vfs.zfs.arc_max was set to 26843545600 (~25GB) and vm.kmem_size to 42902645760 (~40GB), and the ARC was reset. But after some use, it still peaks at 12.7GB and leaves about 17GB unused, just like before with only 16GB of total RAM.
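For reference, those byte values can be cross-checked against the sizes in the GUI graphs with plain shell arithmetic (nothing FreeNAS-specific; the numbers are the ones quoted in this thread):

```shell
# Convert the tunable byte values into whole (floor) GiB so they can be
# compared with the sizes shown in the FreeNAS reporting graphs.
arc_max=26843545600     # vfs.zfs.arc_max as set by autotune (-> 25 GiB)
kmem=42902645760        # vm.kmem_size as set by autotune (~40 GiB)
observed=13651230848    # ARC ceiling actually observed (~12.7 GiB)
for bytes in $arc_max $kmem $observed; do
    echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"
done
```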

So what should I do to make FreeNAS use this additional memory? I attached a few more screenshots with actual data.

Thanks,
Alex
 

Attachments

  • sysinfo.jpg (186.5 KB)
  • tunables.jpg (250.4 KB)
  • report01.jpg (260.4 KB)
  • report02.jpg (253.9 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And you're doing stuff that would actually be cached by ARC? Because report01 makes it look like very little's going on on the system.
 

Pilot1400

Cadet
Joined
May 23, 2015
Messages
5
The system is not in production use yet, so I doubt it really needs the bigger ARC at the moment. However, since the ARC grows and is then always capped at exactly the same size, with exactly 16GB left unused, I doubt this is expected behaviour.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I don't see how you can expect to learn anything useful about the way your system will utilize ARC and what kind of hit rate to expect without giving it the same kind of workload you expect it to have in production.

Regarding autotune, if anything is capping ARC size, it's that (you've already seen autotune cap your ARC at 25GB).
 

Pilot1400

Cadet
Joined
May 23, 2015
Messages
5
Well, I don't know how you test your systems, but I try to identify and tackle one problem at a time. Currently I am not trying to learn other things about my system, only how to make use of its RAM. Let's just assume (because I did) that I put enough load on the system that the ARC filled up and could have grown beyond that 12.7 GB limit.

As for autotune, it certainly has its hand in this. However, the system is ignoring the 25 GB cap anyway and sticking to the old cap of 12.7 GB that was established with 16 GB of total RAM. How can I solve this problem?

Thanks,
Alex
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, I don't know how you test your systems, but I try to identify and tackle one problem at a time. Currently I am not trying to learn other things about my system, only how to make use of its RAM. Let's just assume (because I did) that I put enough load on the system that the ARC filled up and could have grown beyond that 12.7 GB limit.

As for autotune, it certainly has its hand in this. However, the system is ignoring the 25 GB cap anyway and sticking to the old cap of 12.7 GB that was established with 16 GB of total RAM. How can I solve this problem?

Thanks,
Alex

You probably need to convince us that there's stuff going on in the system that'd be cached by ARC, and that you haven't made any obvious errors. You did reboot after enabling autotune, yes? What's the output of "sysctl vfs.zfs.arc_max"?

Code:
% sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 22983332751


That's a 32GB system, but I can't recall whether that value was provided by autotune or not.
 

Pilot1400

Cadet
Joined
May 23, 2015
Messages
5
Fair enough. The FreeNAS system provides only iSCSI; there is a total of 6 (Hyper-V) VMs stored on FreeNAS, as well as an Alfresco repository with ~25,000 docs and images. There are two 1Gb NICs on the iSCSI portal, on a separate network with separate switches and MPIO access from the hypervisors. This is all working nicely and transfer rates are fine, no problems there. I have constant traffic of ~15 MB/s on each of the NICs, with occasional bursts and peaks of up to 120 MB/s combined when doing backups, thanks to MPIO. You don't see any of this in the screenshots because I took them after a session of autotune changes and reboots, during which the box did nothing else.

The ARC fills up to those 12.7 GB within minutes after firing up the VMs and peaks there. Total data volume on FreeNAS is 750 GB, with the VMs taking up 150 GB, so I am pretty sure my total working set is > 12.7 GB. Once the ARC fills those 12.7 GB, the hit rate is "down" to 80-81%. I am sure the additional 16 GB could help.

Yes, I rebooted after enabling autotune. Judging by the values, it actually changed something, but those values are ignored. Also, the vm.kmem_size looks kind of weird for 32 GB. I don't have access to the box from here; I will check the output tomorrow.

Thanks,
Alex
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
You probably need to convince us that there's stuff going on in the system that'd be cached by ARC, and that you haven't made any obvious errors. You did reboot after enabling autotune, yes? What's the output of "sysctl vfs.zfs.arc_max"?

Code:
% sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 22983332751


That's a 32GB system but I can't recall if that was provided by autotune or not.

My system with 32GB and no tuning at all done to it, just for comparison.

Code:
vfs.zfs.arc_max: 32191201280
 

Pilot1400

Cadet
Joined
May 23, 2015
Messages
5
Code:
sysctl vfs.zfs.arc_max
13651230848


which translates to the 12.7 GB the GUI shows as well.

While looking at it, both the vfs.zfs.arc_max and vm.kmem_size tunables are of type "loader" on my system, while the rest are "sysctl". Is this OK, or should they be sysctl as well?

Alex
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I believe arc_max and kmem_size are loader tunables. A loader tunable is set before the kernel loads (configured via the boot loader). Many things are tunable AFTER the kernel boots, but some key system parameters (like how much memory to consume) are determined (or overridden) at boot time.

So either you didn't reboot, your loader config isn't loading, or something is being rejected when the loader config is read. I suggest rebooting the machine and carefully watching the boot messages. Unfortunately, the interesting stuff will probably whip by and be hard to catch.
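One way to narrow that down is to check whether the tunable actually reached the loader config and compare it with the live kernel value. A sketch, simulated here with a temp file so it is self-contained; it assumes FreeNAS 9.3 writes GUI loader tunables to /boot/loader.conf.local, which you should verify on your own install:

```shell
# On the real box the two values to compare would come from:
#   grep vfs.zfs.arc_max /boot/loader.conf.local   (path is an assumption)
#   sysctl -n vfs.zfs.arc_max
# Here we simulate the config file so the logic is self-contained.
conf=$(mktemp)
printf 'vfs.zfs.arc_max="26843545600"\n' > "$conf"  # stand-in for loader.conf.local

wanted=$(grep -o '[0-9][0-9]*' "$conf")   # value the loader should apply
live=13651230848                          # stand-in for `sysctl -n vfs.zfs.arc_max`

if [ "$wanted" != "$live" ]; then
    echo "loader wants $wanted but kernel uses $live: loader config was not applied"
fi
rm -f "$conf"
```

If the two values match, the loader config was applied and the cap is coming from somewhere else; if they differ (as in the simulated case), the loader setting never took effect.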
 