slushieken
winbindd[6530]: failed to munlock memory: Cannot allocate memory (12)
I am also seeing errors of failed to mlock memory.
This is occurring constantly on my NAS now, even right after a reboot.
From what I have researched, the problem is not with the erroring service itself but with its ability to 'wire' (lock) memory for its work. I don't fully understand wired memory yet, so forgive my explanation.
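In case it helps anyone else reading, here is a minimal way to see how much memory is currently wired and what the kernel's wiring limits are. This assumes the stock FreeBSD 9.x sysctl names; I am only reading values here, not changing anything:
Code:
# Pages currently wired; multiply by the page size for bytes
sysctl vm.stats.vm.v_wire_count
sysctl hw.pagesize
# Kernel ceiling on wired pages; mlock() fails with ENOMEM (12) beyond it
sysctl vm.max_wired
# Per-process locked-memory limit (use 'limit memorylocked' in csh)
ulimit -l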
This seems to be blocking activity: it disconnects/logs my user off the AFP (Apple Filing Protocol) Time Machine share. That leaves my Time Machine backup files in an open state on the server, and the backup client then reports the backup files are already in use and cannot access them. I restart AFP or reboot to clear that state, rinse/repeat.
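When it gets stuck, this is roughly how I check whether afpd still holds the backup files open before bouncing the service. fstat is in the FreeBSD base system; the service name is my assumption, since the AFP daemon here is netatalk's afpd and it may be safer to restart it from the GUI:
Code:
# Show open files held by lingering afpd sessions
fstat | grep afpd
# Restart the AFP daemon so stale sessions release their files
# (service name assumed; the GUI service toggle does the same)
service netatalk restart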
I am not sure what to provide or how much detail to give. I am guessing this is a bug that I need to submit, but I have never done that before, so I am asking here to 1. make sure that is what I have, and 2. hopefully get help fixing it.
The basics:
Build: FreeNAS-9.2.1.5-RELEASE-x64 (80c1d35)
Platform: AMD Athlon(tm) II X3 400e Processor
Memory: 16324MB (ECC memory - passes all checks)
The following were set via autotune. These are my only non-vanilla settings:
vfs.zfs.arc_max 10584279705
vm.kmem_size 11760310784
vm.kmem_size_max 14700388480
5 x 500 GB disks in a RAIDZ1 array - total size 2 TB, 1.7 TB available
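For reference, the three autotune values above are loader tunables; a quick way to confirm what the running kernel actually picked up is:
Code:
# Read the live values straight from the kernel
sysctl vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max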
https://bugs.freenas.org/issues/3842 describes my problem. Adjusting the memory settings did not help there. The ticket says it was fixed in an earlier release, without any details given.
arc_summary.py shows all memory and performance within norms.
Code:
[root@vault] ~# arc_summary.py
System Memory:
1.76% 278.83 MiB Active, 2.53% 400.20 MiB Inact
56.49% 8.72 GiB Wired, 0.02% 3.58 MiB Cache
39.18% 6.05 GiB Free, 0.00% 768.00 KiB Gap
Real Installed: 16.00 GiB
Real Available: 99.64% 15.94 GiB
Real Managed: 96.87% 15.44 GiB
Logical Total: 16.00 GiB
Logical Used: 59.72% 9.55 GiB
Logical Free: 40.28% 6.45 GiB
Kernel Memory: 193.50 MiB
Data: 88.37% 170.99 MiB
Text: 11.63% 22.51 MiB
Kernel Memory Map: 10.91 GiB
Size: 75.07% 8.19 GiB
Free: 24.93% 2.72 GiB
Page: 1
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Storage pool Version: 5000
Filesystem Version: 5
Memory Throttle Count: 0
ARC Misc:
Deleted: 352.34k
Recycle Misses: 3.52k
Mutex Misses: 0
Evict Skips: 0
ARC Size: 82.42% 8.12 GiB
Target Size: (Adaptive) 82.42% 8.12 GiB
Min Size (Hard Limit): 12.50% 1.23 GiB
Max Size (High Water): 8:1 9.86 GiB
ARC Size Breakdown:
Recently Used Cache Size: 93.81% 7.62 GiB
Frequently Used Cache Size: 6.19% 515.08 MiB
ARC Hash Breakdown:
Elements Max: 105.75k
Elements Current: 100.00% 105.75k
Collisions: 173.30k
Chain Max: 6
Chains: 16.36k
Page: 2
------------------------------------------------------------------------
ARC Total accesses: 600.39k
Cache Hit Ratio: 97.50% 585.39k
Cache Miss Ratio: 2.50% 15.00k
Actual Hit Ratio: 67.61% 405.90k
Data Demand Efficiency: 96.93% 193.96k
Data Prefetch Efficiency: 18.76% 4.63k
CACHE HITS BY CACHE LIST:
Anonymously Used: 30.10% 176.18k
Most Recently Used: 19.22% 112.52k
Most Frequently Used: 50.12% 293.38k
Most Recently Used Ghost: 0.42% 2.46k
Most Frequently Used Ghost: 0.15% 858
CACHE HITS BY DATA TYPE:
Demand Data: 32.12% 188.00k
Prefetch Data: 0.15% 869
Demand Metadata: 37.22% 217.89k
Prefetch Metadata: 30.51% 178.62k
CACHE MISSES BY DATA TYPE:
Demand Data: 39.71% 5.96k
Prefetch Data: 25.08% 3.76k
Demand Metadata: 12.20% 1.83k
Prefetch Metadata: 23.01% 3.45k
Page: 3
------------------------------------------------------------------------
Page: 4
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 3.78m
Hit Ratio: 95.81% 3.62m
Miss Ratio: 4.19% 158.34k
Colinear: 158.34k
Hit Ratio: 0.03% 45
Miss Ratio: 99.97% 158.29k
Stride: 3.62m
Hit Ratio: 99.99% 3.62m
Miss Ratio: 0.01% 248
DMU Misc:
Reclaim: 158.29k
Successes: 2.55% 4.03k
Failures: 97.45% 154.26k
Streams: 7.48k
+Resets: 0.71% 53
-Resets: 99.29% 7.42k
Bogus: 0
Page: 5
------------------------------------------------------------------------
Page: 6
------------------------------------------------------------------------
ZFS Tunable (sysctl):
kern.maxusers 384
vm.kmem_size 11760310784
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 14700388480
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 1297345024
vfs.zfs.mfu_ghost_metadata_lsize 698368
vfs.zfs.mfu_ghost_size 1298043392
vfs.zfs.mfu_data_lsize 441823744
vfs.zfs.mfu_metadata_lsize 2333184
vfs.zfs.mfu_size 444617728
vfs.zfs.mru_ghost_data_lsize 420366848
vfs.zfs.mru_ghost_metadata_lsize 90061312
vfs.zfs.mru_ghost_size 510428160
vfs.zfs.mru_data_lsize 8126392320
vfs.zfs.mru_metadata_lsize 40626688
vfs.zfs.mru_size 8204063232
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 16384
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 2646069926
vfs.zfs.arc_meta_used 155423568
vfs.zfs.arc_min 1323034963
vfs.zfs.arc_max 10584279705
vfs.zfs.dedup.prefetch 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.write_to_degraded 0
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.mg_alloc_failures 8
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.weight_factor_enable 0
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 10485760
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.gang_bang 131073
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.recover 0
vfs.zfs.txg.timeout 5
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.larger_ashift_minimal 0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.trim_max_pending 64
vfs.zfs.vdev.trim_max_bytes 2147483648
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.use_uma 1
vfs.zfs.snapshot_list_prefetch 0
vfs.zfs.version.ioctl 3
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.trim.enabled 1
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
Page: 7
------------------------------------------------------------------------
[root@vault] ~#
I have very slowly built this system at home, step by step, working out problems as I find them. At this point I have not configured anything more than the Time Machine AFP share and backups. I use a simple Windows 2012 Server Active Directory domain with 2 users for authentication, and that part is working well. I did notice that I did not see my user accounts when setting initial permissions while creating the share in the GUI, so I set those via the command line instead, which worked fine. I have seen indications that the two problems are related.
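For anyone in the same spot, this is roughly what I did from the command line; the dataset path and the domain/account names here are made up, so substitute your own:
Code:
# Hypothetical pool path and AD account - adjust for your setup.
# With winbind resolving AD names, chown accepts DOMAIN\user directly.
chown -R "MYDOMAIN\backupuser" /mnt/tank/timemachine
chmod -R u+rwX /mnt/tank/timemachine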
This is also my chance to say thanks to all of you for the help you give and your time. Off and on over my life I have helped out in forums answering questions, but the way some people stick around helping others for months and years on a project they are not even paid for just amazes me. You are truly a different breed, IMHO. It is people like you who make open source possible, and your dedication is an inspiration to me.