How to stop FreeNAS from using swap


toadman

Guru
Joined
Jun 4, 2013
Messages
619
I haven't seen use_uma make a difference. (On my system it is set to '1'.)

I would set both the free_target sysctls to 65536 and see what happens from there. You can go higher if you still see swap, or lower if you don't. The value is in units of 4 KB pages of memory.
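
For concreteness, a sketch of what that looks like from a shell. I'm assuming the two sysctls in question are vm.v_free_target and vfs.zfs.arc_free_target (that's what later posts in this thread set); adjust if your build names them differently:

Code:
# Set both free-space targets to 65536 pages (65536 * 4 KB = 256 MB kept free)
sysctl vm.v_free_target=65536
sysctl vfs.zfs.arc_free_target=65536

On FreeNAS you'd add the same two values under System -> Tunables (Type: sysctl) so they persist across reboots.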

From what I have gathered in my own experimentation, the optimal value is going to depend on your use case.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I just looked at the original pic you posted. Looks like you are running along with 600+ MB of free memory and still getting swap. I can't see what's to the left of the graph from when things started, but assuming the 600+ MB of free memory is constant, you may need to start by setting those free_target values to 262,144 (which would be 1 GB).

While I have not personally debugged this to root cause, what appears to happen on these systems is that eviction from the ARC doesn't happen fast enough when memory is requested, so the OS swaps some pages out. The free_target controls keep more available memory around, which basically acts as a larger buffer. So if 600+ MB of free memory isn't enough, you probably need to go higher; hence the suggestion of 262,144. Check what the values are before you start; it would be interesting to see what the system thinks it needs.
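
To see what the system is using before you change anything (same assumed sysctl names as above):

Code:
# Read the current targets; the units are 4 KB pages
sysctl vm.v_free_target vfs.zfs.arc_free_target
# For reference: 262144 pages * 4 KB/page = 1 GB kept free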

I noticed you are also running 11.1, though I'm not sure which update. There was a memory leak, I think related to SMB (not sure if you are running that service). I saw that it was supposed to be fixed in 11.1-U1; however, I'm on 11.1-U2 on my backup server and I'm still seeing a lot of inactive memory growth. You don't appear to have this issue, at least based on the original pic of the memory reporting, but it's something to keep in mind if you continue to have problems.
 

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
toadman said:
I just looked at the original pic you posted. ... you may need to start by setting those free_target values to 262,144 (which would be 1 GB). ...

Yeah, I noticed 11.1-U1 not having much swap usage, but 11.1-U2 does have swap usage. I do use SMB to move files over sometimes, and then turn the service off, since I don't move files very often. I've changed free_target to your suggested value. Thanks.
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
My understanding from the bug tracker is that there is an issue with bhyve VMs not having seatbelts to keep RAM usage in check (in a few different scenarios, including VMs being created with more RAM than is physically available and, I believe, plain old exceeding the RAM allotted when the VM was created).

This seems to be the cause of the swap usage (everyone seems to be saying that, at least). My machine starts to use swap while there is still free RAM available, and it seems like inactive memory doesn't get flushed out. A couple of weeks ago I checked my machine (which had been pretty idle for the previous few days) and half the RAM, 15-16 GB, was marked off as inactive. Transferring files to the machine or using a VM had no effect on it. Hopefully the bug fix helps with this (it almost seems as though the VM stores data in RAM that is already in RAM from when it was read/loaded and sent over via NFS/CIFS by FreeNAS).
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
appoli said:
My understanding from the bug tracker is that there is an issue with bhyve VMs not having seatbelts to keep RAM usage in check ...
Can you post the ticket number for that? I haven't been able to find anything in the tracker.
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
Just reporting in: I have had about 30 days of uptime with no swap usage with the above-mentioned workaround.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44

I went through the thread that was linked by Stux (featuring all his research - thank you to both you and Stux!) and I would like to double-check that I have everything right:

The solution (to prevent the inactive RAM and swap usage) is simply to add those two lines to the Tunables section?
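
Assuming the two lines in question are the free-target pair from earlier in the thread, the Tunables entries would look something like this (Type: sysctl):

Code:
vm.v_free_target=65536
vfs.zfs.arc_free_target=65536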


And do I need to change the values that you used? I have 64 GB of RAM, but I'm not sure if the amount of RAM makes a difference in regard to the tunable values...

I'll start off with the values you used and go from there (up if I'm using swap/inactive RAM, and down if I'm not).

Thank you so much guys!

PS - do the values need to be a multiple of anything specific to work?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
People like to use powers of two. Not sure if it’s required.

The ideal is to have the smallest value that works. It's essentially how much unused RAM your system will keep free, and unused RAM is wasted RAM.

But the problem appears to be surging memory use, so you need enough of a buffer that ARC eviction can occur without the memory-demanding process causing swap usage.

Continually growing swap usage is a different problem and indicates a memory leak in something.
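
One way to tell the two cases apart is to log swap over time and see whether it plateaus or keeps climbing; a minimal sketch using the base system's swapinfo:

Code:
# Log swap usage hourly; a plateau suggests a one-off surge,
# steady growth suggests a leak somewhere
while true; do date; swapinfo -m; sleep 3600; done >> /var/tmp/swap.log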
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
Ahh I see, thanks!

So far the tunables have prevented swap from being used, but as you mentioned in the previously linked thread, inactive keeps trending towards 1:1 with wired memory.

In the thread, someone mentioned another thread and setting vm.defer_swapspace_pageouts=1 (which I just set, so I'll have to see what happens).
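
For anyone following along, setting it looks like this (a sketch; whether this knob is present depends on your FreeBSD base version):

Code:
# Ask the pageout daemon to defer swap-backed pageouts
sysctl vm.defer_swapspace_pageouts=1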

In that thread someone mentions another thread (link below) & states the following:

"It looks as though the vm daemon is proactively paging extremely-idle processes to disk, so that in the event of a future allocation of lots of RAM, the daemon doesn't have to pause the new process in order to page out stuff and make room. I don't think it is being forced out by Inactive Memory being held. These appear to be two separate things, in my mind."

https://www.reddit.com/r/freebsd/comments/2k6hq0/swap_virtual_machines/clj2ic1/

Some food for thought, perhaps. Thanks again!
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
So I used the 65536 page values as a starting point and, as was pointed out earlier, the difference between wired memory and ARC grew a little larger, but other than that swap wasn't used. I ended up having about 300-400 MB free; however, the inactive memory issue was not handled, as previously mentioned.

So I halved the page value (since everything was going well), added vm.defer_swapspace_pageouts=1, and rebooted the system. I finally got enough action on the machine to use up all the RAM, and something interesting happened: once the RAM got used up, the amount of inactive memory actually dropped! It seems like most of it went over to "laundry" (which I'm not totally sure what that is) and, on the downside, some swap was used.

If the laundry memory gets cleared out, this sounds like a win to me, so I will keep monitoring. Regarding the swap usage, as you can see in the image, it went up to 94.7 MB, then dropped to 83.7 MB, and seems to be staying static there for a bit.
I'm holding out some hope for this as a 'fix', since I haven't seen the swap go down once it's been used before, and in addition, maybe it was only used because I dropped the free page count.

I'll give the machine a little bit more time to run to see how it plays out then I'll increase the page values.

Edit:
Regarding laundry, the following link (and the link inside of it) was helpful for me. If I'm understanding it right, the FreeBSD developers knew that cleaning out the inactive bucket, as it previously worked, required swap usage when memory was under pressure. The laundry queue is the next step for inactive pages, and the page daemon is apparently supposed to use the relative sizes of the inactive and laundry queues to know when to launder memory, hopefully leading to better swap usage... just waiting to see if the laundry bucket actually starts going down. I know the swap is going down very slowly.
I guess if the laundry bucket doesn't empty, then maybe I need to look at vm_laundry_request and (how to) wake the laundry thread. I think this works via vm_background_launder_target (the number of pages to launder) at vm_background_launder_rate (the laundering rate in KB/s) once the defined ratio of inactive to laundry is reached.
http://freebsd.1045724.x6.nabble.com/PQ-LAUNDRY-td6142103.html
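
To watch the queues described above, the per-queue page counts are exposed under vm.stats; a sketch, assuming a FreeBSD 11.1-era base with the PQ_LAUNDRY work (vm_background_launder_target itself is computed internally rather than exposed as a sysctl, as far as I can tell):

Code:
# Page counts per queue (multiply by 4 KB for bytes)
sysctl vm.stats.vm.v_inactive_count vm.stats.vm.v_laundry_count
# Background laundering knobs (rate in KB/s, max in KB)
sysctl vm.background_launder_rate vm.background_launder_max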
 

Attachments

  • Screen Shot 2018-05-07 at 5.28.42 PM.png

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I think the growing inactive memory may be a memory leak. Depending on the FreeNAS version, either SMB and/or SNMP can leak (fixes expected in 11.1-U5). So if you are using one of those services, it may be the cause.
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
toadman said:
I think the growing inactive memory may be a memory leak. ...

I have one SMB share, but it doesn’t really get used much.

The inactive growth seems to be from the VMs (as you mentioned, there are fixes already done, but not yet released, that allegedly address all these things): when VMs are used and then go idle, that's when the inactive portion grows.

An update:

So far wired is still at 45 GB (it dropped a little, then recovered a little, which it has never done before). Swap has gone down a little as well, to 79 MB.
Laundry went down slightly to 6 GB, but seems to be staying there. However, the VMs started to get used again, leading to more inactive memory.

I’m not sure what ratio between inactive and laundry the system looks for before it starts cleaning up the laundry basket, but I guess time will tell.

Anyhow, I've never had the wired bucket or ARC pool stay so large over time and the inactive stay under 8 GB, so I think this might be a little bit of a win.

Depending on how things go (and the effectiveness of the patches that are to be released), a solution may have been found, or something close to it, with hopefully just some tweaks to the way the laundry basket is handled.
If the active bucket doesn't go up soon, I will probably get a little worried. The VMs don't need much memory to work, but at the moment the media server seems to be a bit choppier, especially when seeking, which I think is attributable to the amount of RAM available to it (I'm guessing that since FreeNAS has the video file in its own RAM, the file doesn't necessarily have to be stored in the RAM allocated to the VM as well, which is probably why I'm only getting slight performance issues).


Then again, I'm a total noob, so I could just be talking out of my a** and have just gotten lucky by listening to the suggestions of those wiser than I...

Added a photo so people can see how memory/swap usage has changed over the past day.
 

Attachments

  • 3B9131EE-AB61-4D15-AC6A-D641E13847AD.jpeg

sam09

Cadet
Joined
Mar 28, 2017
Messages
8
I already reported this in another thread when I thought it was an SMB-related issue, but I might post it here as well since it seems more related to the OP's and appoli's problem. I have 16 GB of memory on a device with one 120 GB SSD, one 1 TB external drive, and a 3-way 3 TB mirror. Only basic services, with Samba and NFS for VM use. I run a Docker VM, which at first had 6 GB of memory allocated to it, but after a month or so of uptime I started getting error messages about swap failure. It turned out my swap utilization had slowly crept up until it was full. This started even when I had quite a bit of free memory available. I lowered the Docker VM memory to 4 GB, but the problem persisted and soon my swap was full again, with hundreds of error messages a day. I checked SMB and it was not showing a memory leak.

The ends of weeks 13 and 17 are scheduled scrubs, and week 15 is when I restarted the VM with 4 GB of memory.

Screen Shot 2018-05-03 at 6.08.41.png


Screen Shot 2018-05-03 at 6.09.23.png


I finally restarted my system with 3 GB of memory allocated to the VM (I had to disable some containers) and now my memory use has remained stable for several days after the restart, with no swap use. I am waiting to see what happens during the first scrub, though...

So, to back up others with this problem and provide more data: there seems to be a problem with the Docker VM and memory use that is not connected to SMB. Hopefully this gets fixed in the next release and I get to allocate some more memory to my VM again. Otherwise I might try the above-mentioned tweaks to the tunables; thanks for everyone's input and help on those.
 

appoli

Dabbler
Joined
Mar 22, 2017
Messages
44
I highly recommend you try the suggested fix of adding the two tunables setting the VM and ARC free targets equal to one another. They are supposed to be equal (the fact that they are not is a bug, I believe), and it should help you out a lot with pretty much no harm from what I can tell. If you're adventurous, you can try the pageout one as well, which seems to be working decently for me.

And just as an FYI (you might know already): they have a fix that has been done for a while now, just waiting on a release, and yes, it's a VM issue (it might be an SMB issue as well, but I don't really use SMB so I can't comment on that). If you go through some of the threads linked, it's mentioned that the block sizes used by the VMs aren't consistent (which makes sense), which is precipitating the issues we are seeing when combined with the other bugs around memory usage.
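
For reference, a consolidated sketch of everything tried in this thread so far; these are starting values only, the right numbers depend on your RAM and workload (sysctl names as assumed earlier):

Code:
# Keep the VM and ARC free-space targets equal (values in 4 KB pages)
sysctl vm.v_free_target=65536
sysctl vfs.zfs.arc_free_target=65536
# Optional / experimental: defer swap-backed pageouts
sysctl vm.defer_swapspace_pageouts=1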

If you do give them a try let us know how things turn out!
 

catnas

Explorer
Joined
Dec 12, 2015
Messages
57
I'm running the latest 11.1 and have started seeing swap usage when previously, under 9.10, there was never any swap used. Unfortunately, I am maxed out on RAM for my motherboard. However, I have 262 MB of free RAM.

I am not running any VMs, only a few jails. Again, no change from under 9.10.

Is this a bug?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080