SOLVED memory and swap usage

Status
Not open for further replies.

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
Hi,

FreeNAS is using quite a lot of swap, even though there is free memory. I want to understand what is going on. Where should I start?

Update: I should add that I am running FreeNAS 11.1-U5.
 

Attachments

  • Bildschirmfoto 2018-06-21 um 19.26.00.png (296.7 KB)
  • Bildschirmfoto 2018-06-21 um 19.24.15.png (179.1 KB)

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What's the workload?
 

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
Just 2 people use this server. Some backup tasks run nightly to another machine, and bhyve is used for a Univention Corporate Server, which is also used only by the same 2 people. I have 3 × 3 TB disks; the volume is about 75 % full.


Intel(R) Xeon(R) CPU E3-1220L V2 @ 2.30GHz

 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Since version 9.10, FreeNAS has had a tendency to use swap in situations where common sense says swap usage should not be necessary, possibly due to a timing problem related to ZFS ARC usage. With one of the two following workarounds, this swap usage can possibly be minimized or even avoided entirely.

(1) Substantially reduce excess swap usage by running Stux's pagein script periodically:
https://forums.freenas.org/index.ph...ny-used-swap-to-prevent-kernel-crashes.46206/

(2) Try to avoid excess swap usage entirely using certain tunables, specifically vm.v_free_target and vfs.zfs.arc_free_target, as discussed here:
https://forums.freenas.org/index.php?threads/swap-with-9-10.42749/page-5#post-453978
Using 32768 instead of 65536 as the value might help avoid wasting main memory.
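As a quick sanity check before changing anything, the current values can be read from a shell on the FreeNAS box. A sketch, assuming the stock FreeBSD sysctl names; to persist a change across reboots, add it as a sysctl-type Tunable in the FreeNAS GUI (System → Tunables) rather than editing files by hand:

```shell
# Read the current free-memory targets (values are in pages, typically 4 KiB each)
sysctl vm.v_free_target vfs.zfs.arc_free_target

# Example: try the smaller value from the linked post for the ARC free target.
# Add the same setting under System -> Tunables (type: sysctl) to make it permanent.
sysctl vfs.zfs.arc_free_target=32768
```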

Some extra questions to compare your situation with my previous observations: Were you able to keep track of swap usage in more detail in the past? Do the weekly/monthly/yearly graphs give some insight? Are scrubs scheduled on this system? Is the beginning of swap usage (after a reboot, for example) related to scrubs?
 

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
Thanks for all your replies. There is only a jail running duplicity for nightly backups, plus bhyve with Univention Corporate Server. Activated services are AFP (Time Machine), FTP, SMB, SSH, and S.M.A.R.T.
I think it's the bhyve VM, since the problem seems to have started around the time I installed UCS.
Swap usage continues to grow until
Code:
Jun 21 05:43:02 freenas swap_pager_getswapspace(25): failed
Jun 21 05:43:02 freenas swap_pager_getswapspace(32): failed
Jun 21 05:43:02 freenas swap_pager_getswapspace(24): failed
Jun 21 05:43:02 freenas swap_pager_getswapspace(27): failed
Jun 21 05:43:02 freenas swap_pager_getswapspace(30): failed
Jun 21 05:43:02 freenas swap_pager_getswapspace(20): failed
appears in the log. Then I usually reboot and it works for a while. Yesterday I started giving the bhyve VM only 2 GB instead of 3 GB, in the hope that FreeNAS has more memory available.
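For anyone following along, swap usage and those pager errors can be watched from a shell using base-system tools. A sketch; the log path is the FreeBSD default:

```shell
# Per-device swap usage, human-readable
swapinfo -h

# Check whether the kernel has already started failing to allocate swap space
grep swap_pager_getswapspace /var/log/messages
```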
 

Attachments

  • Bildschirmfoto 2018-06-22 um 17.15.53.png (264.2 KB)

toadman

Guru
Joined
Jun 4, 2013
Messages
619
encbox said:
FreeNAS is using quite a lot of swap, even though there is free memory. I want to understand what is going on. Where should I start?

Update: I should add that I am running FreeNAS 11.1-U5.

I ran into similar issues.
I no longer ran out of swap after I set the sysctl tunable vm.disable_swapspace_pageouts to 1.

Setting this sysctl disables paging out dirty pages completely, for cases where you would rather have a process killed than write to swap space.

This solved it for me, but you might get processes killed if you are really running low on memory, which my system apparently wasn't.
So YMMV.
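For reference, this is how the workaround looks from the shell; a sketch, and on FreeNAS the persistent place for it is a sysctl-type Tunable in the GUI:

```shell
# Check the current value (0 = pageouts allowed, the default)
sysctl vm.disable_swapspace_pageouts

# Enable the workaround at runtime; add it under System -> Tunables
# (type: sysctl) to make it survive a reboot
sysctl vm.disable_swapspace_pageouts=1
```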

Paul
 

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
I finally solved the case. It took a while, but I noticed the problem was cron in a jail jamming /var/spool/clientmqueue with thousands of mails. Adding the following to /etc/rc.conf
Code:
sendmail_submit_enable="YES"
sendmail_msp_queue_enable="YES"

solved the problem.
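If anyone else hits this, the symptom is easy to confirm from inside the jail before applying the fix. A sketch; mailq is sendmail's standard queue-listing tool:

```shell
# Count messages stuck in the local submission queue
ls /var/spool/clientmqueue | wc -l

# Inspect the queue the way sendmail sees it
mailq
```

The two rc.conf lines enable sendmail's localhost-only submission daemon and its MSP queue runner, so mail generated by cron is actually processed instead of accumulating in clientmqueue.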
 