ARC Size much lower than system memory

Joined
Jan 19, 2016
Messages
2
Hi

I have noticed that the ARC target size is about 22G on my system, which has 32GB of RAM.
The ARC has filled to about 23GB, but where has the rest of the memory gone?

Code:
Mem: 884M Active, 177M Inact, 24G Wired, 5646M Free
ARC: 23G Total, 8851M MFU, 14G MRU, 162K Anon, 217M Header, 133M Other

Code:
time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
20:15:41 0 0 0 0 0 0 0 0 0 22G 23G
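
To see where the wired memory beyond the ARC goes, something like this should help (a rough sketch; the sysctl names assume a stock FreeBSD/FreeNAS install):

Code:
# actual ARC size in bytes
sysctl -n kstat.zfs.misc.arcstats.size
# total size of all kernel (wired) allocations
sysctl -n vm.kmem_map_size
# per-type kernel malloc usage (look for large MemUse entries)
vmstat -m
# per-zone UMA usage (USED x SIZE is the interesting product)
vmstat -z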

Is all of the missing memory being eaten up by Samba? Here is the full process list:

Code:
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 11 188.0 0.0 0 32 ?? RL 2Jan16 43770:55.01 [idle]
root 79667 1.4 0.1 67504 21740 ?? S 8:16PM 0:00.16 dtrace -n \n#pragma D option quiet\n inline int OPT_time = 0
root 79650 0.5 0.1 67504 21740 ?? S 8:16PM 0:00.16 dtrace -n \n#pragma D option quiet\n inline int OPT_time = 0
root 0 0.0 0.0 0 8144 ?? DLs 2Jan16 234:30.58 [kernel]
root 1 0.0 0.0 6276 564 ?? ILs 2Jan16 0:00.14 /sbin/init --
root 2 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [crypto]
root 3 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [crypto returns]
root 4 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [mpt_recovery0]
root 5 0.0 0.0 0 48 ?? DL 2Jan16 0:00.75 [ctl]
root 6 0.0 0.0 0 160 ?? DL 2Jan16 41:44.91 [zfskern]
root 7 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [xpt_thrd]
root 8 0.0 0.0 0 16 ?? DL 2Jan16 0:29.91 [enc_daemon0]
root 9 0.0 0.0 0 16 ?? DL 2Jan16 0:01.24 [pagedaemon]
root 10 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [audit]
root 12 0.0 0.0 0 224 ?? WL 2Jan16 156:32.45 [intr]
root 13 0.0 0.0 0 32 ?? DL 2Jan16 0:00.00 [ng_queue]
root 14 0.0 0.0 0 48 ?? DL 2Jan16 29:27.21 [geom]
root 15 0.0 0.0 0 16 ?? DL 2Jan16 1:24.54 [yarrow]
root 16 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [vmdaemon]
root 17 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [pagezero]
root 18 0.0 0.0 0 16 ?? DL 2Jan16 0:05.46 [bufdaemon]
root 19 0.0 0.0 0 16 ?? DL 2Jan16 6:42.10 [syncer]
root 20 0.0 0.0 0 16 ?? DL 2Jan16 0:06.13 [vnlru]
root 21 0.0 0.0 0 16 ?? DL 2Jan16 0:08.72 [softdepflush]
root 246 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [g_mp_kt]
root 1118 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [ftcleanup]
root 1186 0.0 0.0 0 16 ?? IL 2Jan16 0:00.00 [HgfsKReqWorker]
root 2076 0.0 0.0 6280 1312 ?? Is 2Jan16 0:00.47 /sbin/devd
root 2620 0.0 0.0 35876 3308 ?? I 2Jan16 0:00.00 /usr/local/sbin/syslog-ng -p /var/run/syslog.pid
root 2653 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da13p1]
root 2654 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[1] da13p1]
root 2656 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da11p1]
root 2657 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da11p1]
root 2659 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da12p1]
root 2660 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da12p1]
root 2662 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da26p1]
root 2663 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da26p1]
root 2665 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da14p1]
root 2666 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da14p1]
root 2668 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[0] da15p1]
root 2669 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[1] da15p1]
root 2671 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[0] da17p1]
root 2672 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da17p1]
root 2674 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da16p1]
root 2675 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da16p1]
root 2677 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[0] da25p1]
root 2678 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da25p1]
root 2680 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da8p1]
root 2681 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da8p1]
root 2683 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[0] da19p1]
root 2684 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[1] da19p1]
root 2686 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da6p1]
root 2687 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da6p1]
root 2689 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da9p1]
root 2690 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[1] da9p1]
root 2692 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da2p1]
root 2693 0.0 0.0 0 16 ?? DL 2Jan16 0:00.03 [g_eli[1] da2p1]
root 2695 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da1p1]
root 2696 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da1p1]
root 2698 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da3p1]
root 2699 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da3p1]
root 2701 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da4p1]
root 2702 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da4p1]
root 2704 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da5p1]
root 2705 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da5p1]
root 2707 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[0] da21p1]
root 2708 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da21p1]
root 2710 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[0] da27p1]
root 2711 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da27p1]
root 2713 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da22p1]
root 2714 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da22p1]
root 2716 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da23p1]
root 2717 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[1] da23p1]
root 2719 0.0 0.0 0 16 ?? DL 2Jan16 0:00.02 [g_eli[0] da24p1]
root 2720 0.0 0.0 0 16 ?? DL 2Jan16 0:00.01 [g_eli[1] da24p1]
root 2785 0.0 0.0 24668 3744 ?? Is 2Jan16 0:00.00 /usr/sbin/ctld
root 2851 0.0 0.0 37988 4296 ?? Ss 2Jan16 0:00.79 /usr/sbin/gssd
root 2877 0.0 0.0 18280 1964 ?? Ss 2Jan16 0:01.08 /usr/sbin/rpcbind -h 192.168.6.61
root 2881 0.0 0.0 28484 5612 ?? Is 2Jan16 0:00.01 /usr/sbin/mountd -l -rS -h 192.168.6.61 /etc/exports /etc/zfs/exports
root 2895 0.0 0.0 26348 5404 ?? Is 2Jan16 0:00.01 nfsd: master (nfsd)
root 2896 0.0 0.0 9916 1568 ?? I 2Jan16 0:01.53 nfsd: server (nfsd)
root 2907 0.0 0.0 286448 5472 ?? Ss 2Jan16 0:00.93 /usr/sbin/rpc.statd -h 192.168.6.61
root 2918 0.0 0.0 26428 5516 ?? Ss 2Jan16 0:01.73 /usr/sbin/rpc.lockd -h 192.168.6.61
root 2930 0.0 0.1 67596 17304 ?? S 2Jan16 11:48.94 /usr/local/bin/vmtoolsd -c /usr/local/share/vmware-tools/tools.conf -p /usr/local/lib/open-vm-to
root 2934 0.0 0.0 0 16 ?? DL 2Jan16 0:00.00 [Timer]
root 3005 0.0 0.1 134092 36532 ?? S 2Jan16 155:07.77 python: freenas-snmpd (python2.7)
root 3012 0.0 0.0 56696 8952 ?? S 2Jan16 24:16.32 /usr/local/sbin/snmpd -p /var/run/net_snmpd.pid -c /etc/local/snmpd.conf -Ls5d
root 3493 0.0 0.0 26384 4000 ?? Ss 2Jan16 0:47.50 /usr/sbin/ntpd -g -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift
nobody 3693 0.0 0.0 38508 5068 ?? Ss 2Jan16 0:22.17 proftpd: (accepting connections) (proftpd)
root 3885 0.0 0.0 28120 4788 ?? I 2Jan16 0:07.26 /usr/local/sbin/smartd -i 1800 -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid
root 4035 0.0 0.0 30500 5200 ?? Is 2Jan16 0:00.01 nginx: master process /usr/local/sbin/nginx
root 4169 0.0 0.6 391204 207084 ?? I 2Jan16 2:18.68 /usr/local/bin/python -R /usr/local/www/freenasUI/manage.py runfcgi method=threaded host=127.0.0
root 4173 0.0 0.3 221780 86044 ?? S 2Jan16 20:18.13 python: alertd (python2.7)
messagebus 4183 0.0 0.0 18460 2312 ?? Is 2Jan16 0:00.00 /usr/local/bin/dbus-daemon --system
root 4251 0.0 0.1 166620 30796 ?? Ss 2Jan16 123:48.08 /usr/local/sbin/collectd
root 4411 0.0 0.0 53392 5688 ?? Is 2Jan16 0:00.02 /usr/sbin/sshd
root 4502 0.0 0.0 12056 1596 ?? Is 2Jan16 0:00.00 daemon: /usr/local/libexec/nas/register_mdns.py[70941] (daemon)
root 5467 0.0 0.0 18296 1896 ?? Is 2Jan16 0:03.21 /usr/sbin/cron -s
root 5672 0.0 0.0 45020 4472 ?? I 2Jan16 0:00.09 /sbin/zfsd -d zfsd
root 5699 0.0 0.0 73960 11068 ?? Is 2Jan16 0:22.25 /usr/local/sbin/syslog-ng -p /var/run/syslog.pid
www 55811 0.0 0.0 30500 5660 ?? I Mon12AM 0:00.22 nginx: worker process (nginx)
root 70884 0.0 0.0 213812 15120 ?? Ss 8:51AM 0:02.92 /usr/local/sbin/nmbd --daemon --configfile=/usr/local/etc/smb4.conf
root 70888 0.0 0.1 264980 20792 ?? Is 8:51AM 0:01.21 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
root 70892 0.0 0.1 246528 18064 ?? Is 8:51AM 0:00.05 /usr/local/sbin/winbindd --daemon --configfile=/usr/local/etc/smb4.conf
root 70893 0.0 0.1 253068 18224 ?? S 8:51AM 0:00.11 /usr/local/sbin/winbindd --daemon --configfile=/usr/local/etc/smb4.conf
nobody 70935 0.0 0.0 14064 2248 ?? Is 8:52AM 0:00.04 /usr/local/sbin/mdnsd
root 70936 0.0 0.1 258564 18580 ?? I 8:52AM 0:00.06 /usr/local/sbin/winbindd --daemon --configfile=/usr/local/etc/smb4.conf
root 70938 0.0 0.1 248612 18120 ?? I 8:52AM 0:00.05 /usr/local/sbin/winbindd --daemon --configfile=/usr/local/etc/smb4.conf
root 70941 0.0 0.2 182564 62768 ?? I 8:52AM 0:01.61 /usr/local/bin/python /usr/local/libexec/nas/register_mdns.py (python2.7)
root 71456 0.0 0.1 298632 21784 ?? I 8:54AM 0:00.21 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
root 79252 0.0 0.0 3784 1456 ?? IN 8:13PM 0:00.00 sleep 300
root 79373 0.0 0.0 76092 6280 ?? Ss 8:14PM 0:00.05 sshd: root@pts/0 (sshd)
root 79649 0.0 0.0 12068 3280 ?? Ss 8:16PM 0:00.01 /usr/local/bin/ksh93 -p /usr/local/bin/zilstat 10 1
root 79666 0.0 0.0 12068 3280 ?? Ss 8:16PM 0:00.01 /usr/local/bin/ksh93 -p /usr/local/bin/zilstat 5 1
root 79668 0.0 0.0 12068 3280 ?? Ss 8:16PM 0:00.01 /usr/local/bin/ksh93 -p /usr/local/bin/zilstat 1 1
root 79669 0.0 0.1 67504 21740 ?? S 8:16PM 0:00.16 dtrace -n \n#pragma D option quiet\n inline int OPT_time = 0
root 80616 0.0 0.1 344180 24956 ?? I 9:50AM 0:00.29 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
root 82979 0.0 0.1 110704 21108 ?? S 2Jan16 5:18.76 python: webshelld (python2.7)
root 3908 0.0 0.0 18604 3204 v0- IN 2Jan16 0:05.76 /bin/sh /usr/local/sbin/pbid
root 5664 0.0 0.2 178100 63388 v0 Is+ 2Jan16 0:01.92 python /etc/netcli (python2.7)
root 5665 0.0 0.0 12056 1644 v1 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv1
root 5666 0.0 0.0 12056 1644 v2 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv2
root 5667 0.0 0.0 12056 1644 v3 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv3
root 5668 0.0 0.0 12056 1644 v4 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv4
root 5669 0.0 0.0 12056 1644 v5 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv5
root 5670 0.0 0.0 12056 1644 v6 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv6
root 5671 0.0 0.0 12056 1644 v7 Is+ 2Jan16 0:00.00 /usr/libexec/getty Pc ttyv7
root 79383 0.0 0.0 21680 3960 0 Ss 8:14PM 0:00.05 -csh (csh)
root 79670 0.0 0.0 16264 1944 0 R+ 8:16PM 0:00.00 ps aux

If it is Samba, why? A quick check is sketched below; my smb4.conf follows.
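
Summing the resident set sizes of the Samba processes from the listing above (a rough awk sketch; ps aux reports RSS in KiB):

Code:
ps aux | awk '/[s]mbd|[n]mbd|[w]inbindd/ { rss += $6 } END { printf "%.1f MiB resident\n", rss/1024 }'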

Code:
[global]
server max protocol = SMB2
encrypt passwords = yes
dns proxy = no
strict locking = no
oplocks = yes
deadtime = 15
max log size = 51200
max open files = 942932
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
getwd cache = yes
guest account = nobody
map to guest = Bad User
obey pam restrictions = yes
directory name cache size = 0
kernel change notify = no
panic action = /usr/local/libexec/samba/samba-backtrace
nsupdate command = /usr/local/bin/samba-nsupdate -g
ea support = yes
store dos attributes = yes
lm announce = yes
acl allow execute always = true
acl check permissions = true
dos filemode = yes
multicast dns register = no
domain logons = no
local master = no
idmap config *: backend = tdb
idmap config *: range = 90000001-100000000
server role = standalone
netbios name = DONKEY
workgroup = MD
security = user
pid directory = /var/run/samba
create mask = 0666
directory mask = 0777
client ntlmv2 auth = yes
dos charset = CP437
unix charset = UTF-8
log level = 2
socket options = TCP_NODELAY SO_KEEPALIVE SO_RCVBUF=131072 SO_SNDBUF=131072
read raw = yes
write raw = yes
#max xmit = 262144
max xmit = 12288
getwd cache = yes
write cache size = 262144
#aio read size = 16384
#aio write size = 16384
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no


[multimedia]
path = /mnt/pool01/multimedia
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
vfs objects = zfs_space zfsacl aio_pthread streams_xattr
hide dot files = yes
hosts allow = 10.15.5.0/24 10.15.6.0/24 10.222.0.1 192.168.7.10 192.168.7.100
guest ok = yes
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare
veto files = /Thumbs.db/Temporary Items/.DS_Store/.AppleDB/.TemporaryItems/.AppleDouble/.bin/.AppleDesktop/Network Trash Folder/.Spotlight/.Trashes/.fseventd/
delete veto files = yes
hide dot files = yes
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I have noticed that the ARC target size is about 22G on my system, which has 32GB of RAM.
Unless you have (stupidly) activated auto-tune, this should not be the case. FreeNAS should use as much RAM (minus about 1GB) as is available to it for ARC.

Please tell us the output of this:
Code:
 sysctl -a vfs.zfs.arc_max
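
The current ARC target and actual size sit next to it in the stats tree, if you want to compare all three (sysctl names as found on a stock FreeNAS 9.x install):

Code:
sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size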
 
Joined
Jan 19, 2016
Messages
2
vfs.zfs.arc_max: 29962594713

BTW:
vm.kmem_size: 42915440640
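
For reference, converting those to GiB (quick bc one-liners):

Code:
echo '29962594713 / 1024^3' | bc -l    # arc_max   = ~27.9 GiB
echo '42915440640 / 1024^3' | bc -l    # kmem_size = ~40.0 GiB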

And how can I undo Auto Tune?
Do I just clear the flag and remove the tunables it created?
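
Would grepping the loader configuration show what it added? Something like the following, assuming Auto Tune wrote its tunables to /boot/loader.conf (the path is a guess):

Code:
# list ZFS/kmem tunables that Auto Tune may have written (path is an assumption)
grep -iE 'vfs\.zfs|vm\.kmem' /boot/loader.conf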
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Actually, that arc_max looks right for 32GB of RAM, sir.

I don't believe you should be capping out at 23GB as you said. If you let FreeNAS continue to churn the ARC, I suspect it will grow to consume much of the available RAM.
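
If you want to watch it, a trivial loop that samples the ARC size once a minute:

Code:
while :; do sysctl -n kstat.zfs.misc.arcstats.size; sleep 60; done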
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
What else are you running? My ARC is at about 24/32GB, but I'm running a few jails, plus a VM in a VirtualBox jail. ARC drops further when I run multiple VMs.
 

ShimadaRiku

Contributor
Joined
Aug 28, 2015
Messages
104
Have you figured out why?

I also have 32GB, but I've noticed the ARC never goes above 23GB.
 