ARC doesn't consume all RAM

BlueChris

Cadet
Joined
Aug 11, 2021
Messages
4
Dear friends,

I'm a long-time reader, but I never had the courage to post until now.

Here is my situation: my main TrueNAS server, which hosts all the VMs for my ESXi servers, doesn't want to consume all the memory the machine has.
The machine and everything on it are super fast, but for the life of me I cannot figure out why this is happening.

The machine is as follows (in case my signature is not shown):
TrueNAS CORE 12.0-U5
Supermicro SuperServer 2028U-E1CNRT+
2 x Intel Xeon E5-2660 v3 (20 cores / 40 threads total)
256 GB of HP ECC Smart Memory (8 x 32 GB @ 2133 MHz)
1 x Supermicro SATA DOM 32 GB (boot)
3 x LSI SAS 9300-8i in IT mode, each connected to 2 of the 6 ports on the 2028U backplane
1 x HPE Smart Array P822 controller in HBA mode, connected to the MSA P2000 enclosure
1 x HPE Ethernet 10Gb 2-port 560SFP+
2 x Supermicro AOC-SLG3-2M NVMe cards with
4 x Intel Optane SSD DC P4801X Series 200 GB, M.2
(80 GB for ZIL/SLOG: a 20 GB partition on each one, in RAID 10)
4 x HPE P2000 3 TB 6G SAS 7.2K rpm (3.5 in) MDL in an MSA P2000 3.5" SAS enclosure (data pool, RAID 10)
8 x Samsung PM1643a 1.92 TB SAS3 SSD (main VM pool, RAID 10)
6 x Samsung PM1643a 960 GB SAS3 SSD (4 disks as a special vdev for the data pool in RAID 10, and 2 disks striped as L2ARC for the VM pool)

Main memory consumption stops at 160 GB no matter what I do. I deleted all my tunable settings and rebooted, but nothing changed.
I don't have any jails or anything else that needs that memory, and it always looks like this:

# arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Wed Aug 11 16:39:34 2021
FreeBSD 12.2-RELEASE-p9 zpl version 5
Machine: truenas2.roisbrossa.local (amd64) spa version 5000

ARC status: HEALTHY
Memory throttle count: 0

ARC size (current): 71.3 % 164.2 GiB
Target size (adaptive): 71.8 % 165.4 GiB
Min size (hard limit): 3.5 % 8.0 GiB
Max size (high water): 28:1 230.3 GiB
Most Frequently Used (MFU) cache size: 64.8 % 105.1 GiB
Most Recently Used (MRU) cache size: 35.2 % 57.2 GiB
Metadata cache size (hard limit): 75.0 % 172.7 GiB
Metadata cache size (current): 1.6 % 2.7 GiB
Dnode cache size (hard limit): 10.0 % 17.3 GiB
Dnode cache size (current): 0.1 % 13.4 MiB
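
As a cross-check, I also read the same numbers straight from the raw kstats. This is just a sanity check on my side, assuming the stock OpenZFS sysctl names on FreeBSD 12 (all values in bytes):

# sysctl kstat.zfs.misc.arcstats.size    # current ARC size (~164 GiB here)
# sysctl kstat.zfs.misc.arcstats.c       # adaptive target the ARC is aiming for
# sysctl kstat.zfs.misc.arcstats.c_max   # hard ceiling; should match arc_max
# sysctl vfs.zfs.arc_max                 # the tunable backing c_max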



Here are my tunables, even though I have tested without them and nothing changes:
abd_chunk_size 4096
abd_scatter_enabled 1
allow_redacted_dataset_mount 0
anon_data_esize 0
anon_metadata_esize 0
anon_size 172716032
arc.average_blocksize 8192
arc.dnode_limit 0
arc.dnode_limit_percent 10
arc.dnode_reduce_percent 10
arc.evict_batch_limit 10
arc.eviction_pct 200
arc.grow_retry 0
arc.lotsfree_percent 10
arc.max 247230000000
arc.meta_adjust_restarts 4096
arc.meta_limit 0
arc.meta_limit_percent 75
arc.meta_min 0
arc.meta_prune 10000
arc.meta_strategy 1
arc.min 0
arc.min_prefetch_ms 0
arc.min_prescient_prefetch_ms 0
arc.p_dampener_disable 1
arc.p_min_shift 0
arc.pc_percent 0
arc.shrink_shift 0
arc.sys_free 0
arc_free_target 1392955
arc_max 247230000000
arc_min 0
arc_no_grow_shift 5
async_block_max_blocks 18446744073709551615
autoimport_disable 1
ccw_retry_interval 300
checksum_events_per_second 20
commit_timeout_pct 5
compressed_arc_enabled 1
condense.indirect_commit_entry_delay_ms 0
condense.indirect_obsolete_pct 25
condense.indirect_vdevs_enable 1
condense.max_obsolete_bytes 1073741824
condense.min_mapping_bytes 131072
condense_pct 200
crypt_sessions 0
dbgmsg_enable 1
dbgmsg_maxsize 4194304
dbuf.cache_shift 5
dbuf.metadata_cache_max_bytes 18446744073709551615
dbuf.metadata_cache_shift 6
dbuf_cache.hiwater_pct 10
dbuf_cache.lowater_pct 10
dbuf_cache.max_bytes 18446744073709551615
dbuf_state_index 0
ddt_data_is_special 1
deadman.checktime_ms 60000
deadman.enabled 1
deadman.failmode wait
deadman.synctime_ms 600000
deadman.ziotime_ms 300000
debug 0
debugflags 0
dedup.prefetch 0
default_bs 9
default_ibs 15
delay_min_dirty_percent 98
delay_scale 500000
dirty_data_max 4294967296
dirty_data_max_max 4294967296
dirty_data_max_max_percent 25
dirty_data_max_percent 10
dirty_data_sync_percent 95
disable_ivset_guid_check 0
dmu_object_alloc_chunk_shift 7
dmu_offset_next_sync 0
dmu_prefetch_max 134217728
dtl_sm_blksz 4096
flags 0
fletcher_4_impl [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
free_bpobj_enabled 1
free_leak_on_eio 0
free_min_time_ms 1000
history_output_max 1048576
immediate_write_sz 32768
initialize_chunk_size 1048576
initialize_value 16045690984833335022
keep_log_spacemaps_at_export 0
l2arc.feed_again 1
l2arc.feed_min_ms 200
l2arc.feed_secs 1
l2arc.headroom 2
l2arc.headroom_boost 200
l2arc.meta_percent 33
l2arc.mfuonly 0
l2arc.noprefetch 0
l2arc.norw 0
l2arc.rebuild_blocks_min_l2size 1073741824
l2arc.rebuild_enabled 1
l2arc.trim_ahead 0
l2arc.write_boost 40000000
l2arc.write_max 10000000
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_noprefetch 0
l2arc_norw 0
l2arc_write_boost 40000000
l2arc_write_max 10000000
l2c_only_size 0
livelist.condense.new_alloc 0
livelist.condense.sync_cancel 0
livelist.condense.sync_pause 0
livelist.condense.zthr_cancel 0
livelist.condense.zthr_pause 0
livelist.max_entries 500000
livelist.min_percent_shared 75
lua.max_instrlimit 100000000
lua.max_memlimit 104857600
max_async_dedup_frees 100000
max_auto_ashift 16
max_dataset_nesting 50
max_log_walking 5
max_logsm_summary_length 10
max_missing_tvds 0
max_missing_tvds_cachefile 2
max_missing_tvds_scan 0
max_nvlist_src_size 0
max_recordsize 1048576
metaslab.aliquot 524288
metaslab.bias_enabled 1
metaslab.debug_load 0
metaslab.debug_unload 0
metaslab.df_alloc_threshold 131072
metaslab.df_free_pct 4
metaslab.df_max_search 16777216
metaslab.df_use_largest_segment 0
metaslab.force_ganging 16777217
metaslab.fragmentation_factor_enabled 1
metaslab.fragmentation_threshold 70
metaslab.lba_weighting_enabled 1
metaslab.load_pct 50
metaslab.max_size_cache_sec 3600
metaslab.mem_limit 75
metaslab.preload_enabled 1
metaslab.preload_limit 10
metaslab.segment_weight_enabled 1
metaslab.sm_blksz_no_log 16384
metaslab.sm_blksz_with_log 131072
metaslab.switch_threshold 2
metaslab.unload_delay 32
metaslab.unload_delay_ms 600000
mfu_data_esize 109139994624
mfu_ghost_data_esize 60923720192
mfu_ghost_metadata_esize 545217536
mfu_ghost_size 61468937728
mfu_metadata_esize 435575296
mfu_size 112901632512
mg.fragmentation_threshold 95
mg.noalloc_threshold 0
min_auto_ashift 12
min_metaslabs_to_flush 1
mru_data_esize 57554265600
mru_ghost_data_esize 112022636544
mru_ghost_metadata_esize 621830656
mru_ghost_size 112644467200
mru_metadata_esize 143611904
mru_size 61415344640
multihost.fail_intervals 10
multihost.history 0
multihost.import_intervals 20
multihost.interval 1000
multilist_num_sublists 0
no_scrub_io 0
no_scrub_prefetch 0
nocacheflush 0
nopwrite_enabled 1
obsolete_min_time_ms 500
pd_bytes_max 52428800
per_txg_dirty_frees_percent 5
prefetch.array_rd_sz 1048576
prefetch.disable 0
prefetch.max_distance 33554432
prefetch.max_idistance 67108864
prefetch.max_streams 8
prefetch.min_sec_reap 2
read_history 0
read_history_hits 0
rebuild_max_segment 1048576
reconstruct.indirect_combinations_max 4096
recover 0
recv.queue_ff 20
recv.queue_length 16777216
recv.write_batch_size 1048576
reference_tracking_enable 0
removal_suspend_progress 0
remove_max_segment 16777216
resilver_disable_defer 0
resilver_min_time_ms 3000
scan_checkpoint_intval 7200
scan_fill_weight 3
scan_ignore_errors 0
scan_issue_strategy 0
scan_legacy 0
scan_max_ext_gap 2097152
scan_mem_lim_fact 20
scan_mem_lim_soft_fact 20
scan_strict_mem_lim 0
scan_suspend_progress 0
scan_vdev_limit 4194304
scrub_min_time_ms 1000
send.corrupt_data 0
send.no_prefetch_queue_ff 20
send.no_prefetch_queue_length 1048576
send.override_estimate_recordsize 0
send.queue_ff 20
send.queue_length 16777216
send.unmodified_spill_blocks 1
send_holes_without_birth_time 1
slow_io_events_per_second 20
spa.asize_inflation 24
spa.discard_memory_limit 16777216
spa.load_print_vdev_tree 0
spa.load_verify_data 1
spa.load_verify_metadata 1
spa.load_verify_shift 4
spa.slop_shift 5
space_map_ibs 14
special_class_metadata_reserve_pct 25
standard_sm_blksz 131072
super_owner 0
sync_pass_deferred_free 2
sync_pass_dont_compress 8
sync_pass_rewrite 2
sync_taskq_batch_pct 75
top_maxinflight 1000
traverse_indirect_prefetch_limit 32
trim.extent_bytes_max 134217728
trim.extent_bytes_min 32768
trim.metaslab_skip 0
trim.queue_limit 10
trim.txg_batch 128
txg.history 100
txg.timeout 75
unflushed_log_block_max 262144
unflushed_log_block_min 1000
unflushed_log_block_pct 400
unflushed_max_mem_amt 1073741824
unflushed_max_mem_ppm 1000
user_indirect_is_special 1
validate_skip 0
vdev.aggregate_trim 0
vdev.aggregation_limit 1048576
vdev.aggregation_limit_non_rotating 131072
vdev.async_read_max_active 10
vdev.async_read_min_active 1
vdev.async_write_active_max_dirty_percent 60
vdev.async_write_active_min_dirty_percent 30
vdev.async_write_max_active 5
vdev.async_write_min_active 1
vdev.bio_delete_disable 0
vdev.bio_flush_disable 0
vdev.cache_bshift 16
vdev.cache_max 16384
vdev.cache_size 0
vdev.def_queue_depth 128
vdev.default_ms_count 200
vdev.default_ms_shift 29
vdev.file.logical_ashift 9
vdev.file.physical_ashift 9
vdev.initializing_max_active 1
vdev.initializing_min_active 1
vdev.max_active 1000
vdev.max_auto_ashift 16
vdev.min_auto_ashift 12
vdev.min_ms_count 16
vdev.mirror.non_rotating_inc 0
vdev.mirror.non_rotating_seek_inc 1
vdev.mirror.rotating_inc 0
vdev.mirror.rotating_seek_inc 5
vdev.mirror.rotating_seek_offset 1048576
vdev.ms_count_limit 131072
vdev.nia_credit 5
vdev.nia_delay 5
vdev.queue_depth_pct 1000
vdev.read_gap_limit 32768
vdev.rebuild_max_active 3
vdev.rebuild_min_active 1
vdev.removal_ignore_errors 0
vdev.removal_max_active 2
vdev.removal_max_span 32768
vdev.removal_min_active 1
vdev.removal_suspend_progress 0
vdev.remove_max_segment 16777216
vdev.scrub_max_active 3
vdev.scrub_min_active 1
vdev.sync_read_max_active 10
vdev.sync_read_min_active 10
vdev.sync_write_max_active 10
vdev.sync_write_min_active 10
vdev.trim_max_active 2
vdev.trim_min_active 1
vdev.validate_skip 0
vdev.write_gap_limit 0
version.acl 1
version.ioctl 15
version.module v2021071201-zfs_f7ba541d64cbc60b21507bd7781331bea1abb12e
version.spa 5000
version.zpl 5
vnops.read_chunk_size 1048576
vol.mode 2
vol.recursive 0
vol.unmap_enabled 1
zap_iterate_prefetch 1
zevent.cols 80
zevent.console 0
zevent.len_max 512
zevent.retain_expire_secs 900
zevent.retain_max 2000
zfetch.max_distance 33554432
zfetch.max_idistance 67108864
zil.clean_taskq_maxalloc 1048576
zil.clean_taskq_minalloc 1024
zil.clean_taskq_nthr_pct 100
zil.maxblocksize 131072
zil.nocacheflush 0
zil.replay_disable 0
zil.slog_bulk 786432
zio.deadman_log_all 0
zio.dva_throttle_enabled 1
zio.exclude_metadata 0
zio.requeue_io_start_cut_in_line 1
zio.slow_io_ms 30000
zio.taskq_batch_pct 80
zio.taskq_batch_tpq 0
zio.use_uma 1
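
As a sanity check on the numbers above: arc.max / arc_max is 247230000000 bytes, and 247230000000 / 1024^3 ≈ 230 GiB, which lines up with the "Max size (high water): 230.3 GiB" line from arc_summary. So the ceiling is set where I want it; the ARC just stops growing at around 164 GiB, well short of it.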

If anyone has any clue why this is happening, I will be thankful. If anything else is needed, please tell me and I will do it.

Thanks,
Chris
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Silly question: Are you accessing more than 160 GB (compressed) of on-disk data?
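If you're not sure, comparing the logical vs. compressed footprint per dataset gives a quick upper bound on what there is to cache:

# zfs list -o name,used,logicalused,compressratio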
 

BlueChris

Cadet
Joined
Aug 11, 2021
Messages
4
Silly question: Are you accessing more than 160 GB (compressed) of on-disk data?
Yes. The main pool, with the 8 x 2 TB SSDs in RAID 10, holds almost 2 TB of VMs (I use zstd-7 compression with a ratio of 1.8).
Maybe the 2 x 1 TB in a stripe for L2ARC consume the free memory? But even if that is the case, the dashboard still shows 70 GB of free memory.
I believe I need to alter something via a tunable that I don't know about. I have set the ARC max to 230 GB, but nothing changed.
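
For reference, this is roughly how I applied it. I am going by the stock FreeBSD sysctl name here; on CORE I also added it under System -> Tunables (Variable vfs.zfs.arc_max, Type sysctl) so it persists across reboots:

# sysctl vfs.zfs.arc_max=247230000000    # ~230 GiB, takes effect at runtime
# sysctl kstat.zfs.misc.arcstats.c_max   # verify the new ceiling was accepted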
 

BlueChris

Cadet
Joined
Aug 11, 2021
Messages
4
I just tried copying data into the pool that holds the VMs, and the ARC usage still doesn't grow.
I'm out of ideas.
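
The only other test I can think of: as far as I understand, copying data into the pool is mostly write traffic, and the ARC warms mainly from reads, so a pure read test might be fairer. Something like this, where the file path is just a placeholder for any large file on the pool:

# dd if=/mnt/vmpool/somebigfile of=/dev/null bs=1M   # generate pure read traffic
# sysctl kstat.zfs.misc.arcstats.size                # check whether the ARC grew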
 

BlueChris

Cadet
Joined
Aug 11, 2021
Messages
4
Can someone help, please? If something else is needed from my config, please tell me and I will provide it.
 