Dang. There goes my hypothesis. I have no jails.
I wonder if it is a memory leak, or inaccurate reporting/interpretation by the GUI (middleware)? This is only partially improved by restarting middlewared, and not at all by restarting collectd (a sketch of the commands follows the top output below).
last pid: 27660; load averages: 0.20, 0.26, 0.23 up 0+22:49:04 11:25:03
72 processes: 1 running, 71 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 978M Active, 516M Inact, 26G Wired, 3845M Free
ARC: 18G Total, 16G MFU, 617M MRU, 8704K Anon, 65M Header, 903M Other
16G Compressed, 17G Uncompressed, 1.07:1 Ratio
Swap: 20G Total, 20G Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
27422 root 22 27 0 380M 261M kqread 6 0:07 0.00% python3.9
27496 root 3 20 0 200M 166M usem 3 0:02 0.00% python3.9
27458 root 3 20 0 199M 165M usem 0 0:02 0.00% python3.9
27494 root 3 20 0 196M 163M piperd 5 0:02 0.00% python3.9
27495 root 3 20 0 196M 162M usem 0 0:02 0.00% python3.9
24764 winnie 11 20 0 213M 123M select 4 0:05 0.22% qbittorrent-nox
24124 root 1 20 0 129M 110M kqread 3 0:00 0.00% smbd
24135 root 1 20 0 127M 109M kqread 0 0:00 0.00% smbd
24137 root 1 20 0 127M 109M kqread 2 0:00 0.00% smbd
25530 plex 17 52 0 207M 80M uwait 7 0:02 0.04% Plex Media Server
24134 root 1 20 0 83M 64M kqread 3 0:00 0.00% winbindd
25567 plex 12 52 15 131M 60M piperd 4 0:02 0.01% Plex Script Host
27466 root 5 20 0 76M 57M usem 2 0:00 0.00% python3.9
27484 root 1 52 0 60M 50M zevent 4 0:00 0.00% python3.9
12831 root 11 20 0 82M 42M nanslp 6 3:33 0.00% collectd
24144 root 1 20 0 44M 25M kqread 5 0:00 0.00% winbindd
24133 root 1 20 0 43M 25M kqread 6 0:00 0.00% winbindd
25600 plex 11 20 0 48M 24M uwait 7 0:01 0.01% Plex Tuner Service
27424 root 1 26 0 22M 12M piperd 7 0:00 0.00% python3.9
1403 root 8 20 0 44M 12M select 3 0:49 0.00% rrdcached
1136 root 2 20 0 28M 11M kqread 1 0:02 0.00% syslog-ng
1252 root 1 -52 r0 11M 11M nanslp 1 0:00 0.00% watchdogd
14215 www 1 20 0 37M 11M kqread 6 0:00 0.00% nginx
26660 root 1 20 0 19M 9012K select 4 0:00 0.00% sshd
1415 root 1 20 0 35M 8976K pause 4 0:00 0.00% nginx
1411 root 1 20 0 18M 8288K select 5 0:00 0.00% sshd
1135 root 1 52 0 19M 8132K wait 7 0:00 0.00% syslog-ng
1338 ntpd 1 20 0 18M 6740K select 5 0:02 0.00% ntpd
434 root 1 20 0 17M 6348K select 6 0:00 0.00% zfsd
23588 root 1 20 0 15M 5756K nanslp 5 0:00 0.00% smartd
26694 root 1 20 0 13M 4628K pause 2 0:00 0.00% zsh
27644 root 1 20 0 13M 3980K CPU1 1 0:00 0.03% top
1421 messagebus 1 52 0 12M 3696K select 7 0:00 0.00% dbus-daemon
24111 root 1 52 0 12M 3352K select 0 0:00 0.00% rsync
26693 root 1 21 0 12M 3240K wait 7 0:00 0.00% su
1358 uucp 1 20 0 12M 3048K select 1 0:02 0.00% usbhid-ups
25380 _dhcp 1 28 0 12M 2932K select 6 0:00 0.00% dhclient
24645 _dhcp 1 21 0 12M 2932K select 4 0:00 0.00% dhclient
1360 uucp 1 20 0 35M 2880K select 0 0:02 0.00% upsd
1009 _dhcp 1 20 0 11M 2840K select 6 0:00 0.00% dhclient
1363 uucp 1 20 0 11M 2828K nanslp 1 0:00 0.00% upslog
1368 uucp 1 20 0 11M 2824K nanslp 5 0:01 0.00% upsmon
1366 root 1 52 0 11M 2816K piperd 7 0:00 0.00% upsmon
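For anyone who wants to reproduce the check, a minimal sketch from a root shell on TrueNAS CORE, assuming the stock service names:

# restart the middleware behind the GUI graphs, then the stats collector
service middlewared restart
service collectd restart

# the kernel's own numbers: wired page count, page size, and ARC size in bytes
sysctl vm.stats.vm.v_wire_count hw.pagesize kstat.zfs.misc.arcstats.size

If wired memory minus the ARC stays large after the restarts, the memory is genuinely held by the kernel rather than being a graphing artifact.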
Same issue with 384 GiB RAM. This weekend I enabled NFS to back up some config files for other systems, and this is a new development. The system is about two months old, and previously the graph showed nearly all ZFS Cache, with less than 10 GiB for Services.

The system is primarily used for VMware over iSCSI. After the NFS backup, I zeroed the free space on a 250 GiB thin-provisioned drive and ran the VMware hole-punching process to reclaim the blocks (the usual sequence is sketched below). The ARC should have grown with all those iSCSI blocks, but it didn't appear to. I think something is really reserving that RAM.
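For anyone unfamiliar with that step, a minimal sketch of the typical reclaim sequence, assuming a Windows guest and an ESXi shell (the drive letter, datastore, and VMDK names are placeholders):

# inside the guest: zero the free space on the thin disk
sdelete64.exe -z D:

# on the ESXi host, with the VM powered off: punch out the zeroed blocks in the VMDK
vmkfstools -K /vmfs/volumes/datastore1/guest/guest.vmdk

# then UNMAP the freed VMFS blocks back to the iSCSI target
esxcli storage vmfs unmap -l datastore1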
Nothing in htop or top comes close to 186 GiB.
The 177 GiB of ZFS Cache from the graph matches what top shows for the ARC.
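A quick cross-check, assuming a root shell on the FreeBSD-based CORE; the awk one-liners only do unit conversion:

# total resident memory of all userland processes, in GiB (RSS is reported in KiB)
ps -ax -o rss | awk 'NR>1 {s+=$1} END {printf "%.1f GiB\n", s/1024/1024}'

# ARC size as the kernel reports it, in GiB
sysctl -n kstat.zfs.misc.arcstats.size | awk '{printf "%.1f GiB\n", $1/1024/1024/1024}'

If neither of those accounts for the missing memory, the remainder is wired kernel memory outside the ARC, which the GUI appears to lump under Services.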
Feb 23 20:14:29 truenas 1 2022-02-23T20:14:29.983691+01:00 truenas.xxx.de collectd 79701 - - write_graphite plugin: Connecting to 192.168.xxx.xxx:2003 via tcp failed. The last error was: failed to connect to remote host: Operation timed out

Does not work. The write_graphite plugin still wants to connect to a nonexistent host. Testing it with 0.0.0.0 as the remote (that's the default, I believe) for a day or two, and I will check whether the error stays away...
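For reference, the write_graphite stanza that collectd loads looks roughly like this (the values are illustrative, not taken from an actual config); whatever address is configured as the remote Graphite server ends up in the Host line, so a stale entry keeps producing the timeout above:

<Plugin write_graphite>
  <Node "graphite">
    Host "0.0.0.0"       # remote Graphite server; a stale IP here causes the timeouts
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
  </Node>
</Plugin>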
I would try a fresh install of U8, create a Samba share, and try to replicate the issue to see if anything changes, so you can look at your config.

There is obviously a problem. After upgrading to U8, TN behaves strangely. First, it no longer uses all the memory; second, when I copy a larger file from my PC to TN over NFS or Samba, free memory increases and the ZFS ARC decreases, and wired stays at 135 GB the whole time instead of 181 GB. The upgrade was made on 17.02.2022, and from that moment all the problems began. I tried everything, deleting tunables and so on, but the problem persists and performance keeps decreasing. After I returned to U7, the system returned to normal.
I solved the problem. The issue was the layout of the memory modules across the two processors. I do not know why, but ZFS seems to look at the amount of memory attached to each processor separately and size the ARC according to the processor with the least memory. In my case it was asymmetric: processor 1 had 128 GB and processor 2 had 64 GB. After I arranged them symmetrically as 2 x 96 GB, the system works perfectly.
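If anyone wants to check whether the ARC ceiling was computed from less than the full installed memory, these sysctls show the relevant numbers; a rough check, assuming the FreeBSD-based CORE (the per-domain counters may differ between releases):

# installed physical memory and the ARC limits ZFS computed, in bytes
sysctl hw.physmem vfs.zfs.arc_max vfs.zfs.arc_min

# number of NUMA domains and the free page count in each
sysctl vm.ndomains
sysctl vm.domain.0.stats.free_count vm.domain.1.stats.free_count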
Same here. After a reboot the RAM usage sits at sub 10GB for Services and then slowly ramps up over time. I'm on TrueNAS-12.0-U8.1.