Jail resource limits

Status
Not open for further replies.

Kam

Dabbler
Joined
Mar 28, 2016
Messages
39
I want to run more jails for things like a web server, but I want to make sure they won't consume too many OS resources. I found that FreeBSD has resource limits implemented via "rctl":
https://www.freebsd.org/doc/handbook/security-resourcelimits.html

On the latest FreeNAS 9.10, trying to use rctl gave me:

[root@freenas] ~# rctl
rctl: RACCT/RCTL present, but disabled; enable using kern.racct.enable=1 tunable

But after adding the "loader" tunable kern.racct.enable=1, rctl works!

I can add resource limits for memory use, max processes, %CPU, and similar for a specific jail. I tested it and it works: limiting max processes gives a "no more processes" message in the jail when trying to start a new shell, and limiting CPU kept the CPU used by the jail's processes around the specified limit.
To load the resource limits automatically, I can add them to the jail-post-start script for the specific jail.
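For reference, each rctl rule has the form subject:subject-id:resource:action=amount. A minimal sketch with a hypothetical jail name myjail (FreeBSD-only commands; they only work once the tunable above is set):

```shell
# Deny memory allocations in jail "myjail" beyond 512 MB
rctl -a jail:myjail:memoryuse:deny=512M

# List all rules matching the jail
rctl jail:myjail

# Remove every rule attached to the jail
rctl -r jail:myjail
```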

I want to ask here whether you see any risk in enabling resource limits via kern.racct.enable=1 on FreeNAS 9.10.
Is it safe to use with jails and plugins when setting only jail:xxx resource limits?

Thanks
 

dlavigne

Guest
I want to ask here whether you see any risk in enabling resource limits via kern.racct.enable=1 on FreeNAS 9.10.

I don't, seeing that it is designed to limit resources...
 

Kam

Dabbler
Joined
Mar 28, 2016
Messages
39
Thanks for the reply.

If anyone would like to use this with FreeNAS, here is my quick how-to.

- Add a new tunable and reboot FreeNAS

In System -> Tunables -> Add Tunable
Variable: kern.racct.enable
Value: 1
Type: Loader
Comment: enable resource control
Enabled: checked

- Check that rctl works; you should get no errors from

# rctl

- Edit the jail-post-start script for your jail (here websrv) and add at the end

# vi /mnt/zpool/jails/.websrv.meta/jail-post-start

jail_post_start "${JAILNAME}"

devfs -m /mnt/zpool/jails/${JAILNAME}/dev rule -s 4 applyset
rctl -a jail:${JAILNAME}:memoryuse:deny=512M
rctl -a jail:${JAILNAME}:maxproc:deny=100
rctl -a jail:${JAILNAME}:openfiles:deny=1024
rctl -a jail:${JAILNAME}:pcpu:deny=50

The devfs call applies a ruleset to /dev to hide devices that should not be accessible inside the jail for security reasons; change the path to match your jails. The rctl calls are examples limiting memory, processes, open files, and the percentage of CPU for the jail. These limits are set each time the jail is started.
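Ruleset 4 in the devfs line refers to the stock devfsrules_jail ruleset that FreeBSD ships in /etc/defaults/devfs.rules (it hides most device nodes from the jail). A quick way to inspect it on your box (a sketch; FreeBSD-only commands):

```shell
# Show the definition of the jail ruleset (its number appears in brackets)
grep -A10 'devfsrules_jail' /etc/defaults/devfs.rules

# Show ruleset 4 as currently loaded in the kernel
devfs rule -s 4 show
```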

- Edit the jail-post-stop script for your jail and add

# vi /mnt/zpool/jails/.websrv.meta/jail-post-stop

jail_post_stop "${JAILNAME}"

rctl -r jail:${JAILNAME}

This will remove the resource limits each time the jail is stopped.

- To check the active resource limits:
[root@freenas] ~# rctl jail:websrv
jail:websrv:pcpu:deny=50
jail:websrv:openfiles:deny=1024
jail:websrv:maxproc:deny=100
jail:websrv:memoryuse:deny=536870912
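Note that rctl echoes memoryuse back in bytes: the 512M limit set above shows up as 512 × 1024 × 1024 = 536870912. A quick sanity check:

```shell
# 512M expressed in bytes, as rctl reports it in rule listings
echo $((512 * 1024 * 1024))
```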

and to check the current usage:

[root@freenas] ~# rctl -hu jail:websrv
cputime=110
datasize=572K
stacksize=0
coredumpsize=0
memoryuse=181M
memorylocked=0
maxproc=13
openfiles=520
vmemoryuse=1413M
pseudoterminals=0
swapuse=0
nthr=13
msgqqueued=0
msgqsize=0
nmsgq=0
nsem=0
nsemop=0
nshm=0
shmsize=0
wallclock=964K
pcpu=0

Maybe this could also be implemented in the GUI for the jail: the option to specify resource limits and devfs rules during jail creation, or to set/edit them later.
 

dlavigne

Guest
Maybe this could also be implemented in the GUI for the jail: the option to specify resource limits and devfs rules during jail creation, or to set/edit them later.

Unfortunately that won't happen as 9.x jails are in feature freeze and 10 implements virtualization differently.
 

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
If anyone would like to use this with FreeNAS, here is my quick how-to.
I tried your method and it seems to work great, but when the jail pushes really hard for resources, my whole FreeNAS box panics.
I tried leaving only rctl -a jail:${JAILNAME}:pcpu:deny=50, but the same thing happens: as soon as I run a heavy process in the jail, FreeNAS crashes altogether.
Any ideas on how to troubleshoot that?
 