What to do with two M.2 slots?

danb35

Hall of Famer
Joined: Aug 16, 2011
Messages: 15,504
I recently upgraded the motherboard in my NAS, mainly to get a usable IPMI experience, but also because my previous X9 board was getting kind of long in the tooth. The new board is running fine, so woohoo. But it has two onboard M.2 slots that I'm not using, and I'm wondering what I might do with them.

Main usage of the system is SMB (mainly with macOS clients) and some NFS sharing (Linux clients) from a pool of spinners. I run apps on a mirrored pool of SATA SSDs. I don't use iSCSI, and I don't store VM images via NFS.

So, what can/should I do with two M.2 slots? I've thought of:
  • L2ARC
    • I'm not clear on how to interpret the output of arc_summary, but it's included below
  • SLOG
    • I don't think any of my usage involves sync writes, so this seems unlikely to benefit (a quick way to check is sketched below)
  • Special VDEV
    • I'll admit I don't understand these, other than that adding one would be irreversible, since my pool consists of RAIDZ2 vdevs
  • Move the apps pool
  • ?
Any thoughts on this would be appreciated.
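
One way to sanity-check the "no sync writes" assumption is to watch the ZIL commit counter for a minute (a rough sketch; it reads the cumulative zil_commit_count that OpenZFS on Linux exposes in /proc/spl/kstat/zfs/zil):

Code:
# A nonzero delta over the interval means something is issuing sync writes
before=$(awk '$1 == "zil_commit_count" {print $3}' /proc/spl/kstat/zfs/zil)
sleep 60
after=$(awk '$1 == "zil_commit_count" {print $3}' /proc/spl/kstat/zfs/zil)
echo "ZIL commits in the last minute: $((after - before))"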

Code:
root@truenas[~]# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Wed Nov 22 08:52:40 2023
Linux 6.1.55-production+truenas                                2.2.0-rc4
Machine: truenas (x86_64)                                      2.2.0-rc4

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    99.8 %   89.9 GiB
        Target size (adaptive):                       100.0 %   90.0 GiB
        Min size (hard limit):                          4.4 %    3.9 GiB
        Max size (high water):                           22:1   90.0 GiB
        Anonymous data size:                          < 0.1 %    4.0 MiB
        Anonymous metadata size:                        0.3 %  272.9 MiB
        MFU data target:                               60.6 %   52.6 GiB
        MFU data size:                                 60.4 %   52.4 GiB
        MFU ghost data size:                                    16.4 GiB
        MFU metadata target:                            7.5 %    6.5 GiB
        MFU metadata size:                              4.8 %    4.1 GiB
        MFU ghost metadata size:                                 4.6 GiB
        MRU data target:                               24.9 %   21.7 GiB
        MRU data size:                                 24.9 %   21.7 GiB
        MRU ghost data size:                                    32.4 GiB
        MRU metadata target:                            7.0 %    6.1 GiB
        MRU metadata size:                              9.6 %    8.4 GiB
        MRU ghost metadata size:                                22.0 GiB
        Uncached data size:                             0.0 %    0 Bytes
        Uncached metadata size:                         0.0 %    0 Bytes
        Bonus size:                                     0.4 %  359.8 MiB
        Dnode cache target:                            10.0 %    9.0 GiB
        Dnode cache size:                              14.1 %    1.3 GiB
        Dbuf size:                                      0.6 %  544.3 MiB
        Header size:                                    0.9 %  872.0 MiB
        L2 header size:                                 0.0 %    0 Bytes
        ABD chunk waste size:                         < 0.1 %   16.4 MiB

ARC hash breakdown:
        Elements max:                                               5.2M
        Elements current:                              66.0 %       3.4M
        Collisions:                                                40.6M
        Chain max:                                                     6
        Chains:                                                   304.2k

ARC misc:
        Deleted:                                                   52.4M
        Mutex misses:                                             101.9k
        Eviction skips:                                           154.4M
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   6.2 TiB
        L2 eligible MFU evictions:                     18.4 %    1.1 TiB
        L2 eligible MRU evictions:                     81.6 %    5.0 TiB
        L2 ineligible evictions:                               525.4 GiB

ARC total accesses:                                                 9.6G
        Total hits:                                    99.5 %       9.5G
        Total I/O hits:                               < 0.1 %       1.3M
        Total misses:                                   0.5 %      49.0M

ARC demand data accesses:                              26.8 %       2.6G
        Demand data hits:                              99.6 %       2.6G
        Demand data I/O hits:                         < 0.1 %     552.5k
        Demand data misses:                             0.3 %       8.4M

ARC demand metadata accesses:                          71.7 %       6.9G
        Demand metadata hits:                          99.9 %       6.9G
        Demand metadata I/O hits:                     < 0.1 %     317.6k
        Demand metadata misses:                         0.1 %       3.6M

ARC prefetch data accesses:                             0.3 %      29.4M
        Prefetch data hits:                             6.7 %       2.0M
        Prefetch data I/O hits:                         0.8 %     249.3k
        Prefetch data misses:                          92.4 %      27.2M

ARC prefetch metadata accesses:                         1.2 %     117.1M
        Prefetch metadata hits:                        91.5 %     107.2M
        Prefetch metadata I/O hits:                     0.1 %     162.1k
        Prefetch metadata misses:                       8.3 %       9.7M

ARC predictive prefetches:                             23.1 %      33.8M
        Demand hits after predictive:                  82.7 %      28.0M
        Demand I/O hits after predictive:               2.1 %     717.3k
        Never demanded after predictive:               15.2 %       5.1M

ARC prescient prefetches:                              76.9 %     112.7M
        Demand hits after prescient:                    6.9 %       7.8M
        Demand I/O hits after prescient:              < 0.1 %      46.4k
        Never demanded after prescient:                93.1 %     104.9M

ARC states hits of all accesses:
        Most frequently used (MFU):                    94.6 %       9.1G
        Most recently used (MRU):                       4.8 %     462.6M
        Most frequently used (MFU) ghost:               0.1 %       5.9M
        Most recently used (MRU) ghost:               < 0.1 %       4.7M
        Uncached:                                       0.0 %          0

DMU predictive prefetcher calls:                                    1.2G
        Stream hits:                                    6.0 %      74.2M
        Stream misses:                                 94.0 %       1.2G
        Streams limit reached:                         91.8 %       1.1G
        Prefetches issued                                          30.0M

L2ARC not detected, skipping section

Solaris Porting Layer (SPL):
        spl_hostid                                                     0
        spl_hostid_path                                      /etc/hostid
        spl_kmem_alloc_max                                       8388608
        spl_kmem_alloc_warn                                        65536
        spl_kmem_cache_kmem_threads                                    4
        spl_kmem_cache_magazine_size                                   0
        spl_kmem_cache_max_size                                       32
        spl_kmem_cache_obj_per_slab                                    8
        spl_kmem_cache_reclaim                                         0
        spl_kmem_cache_slab_limit                                  16384
        spl_max_show_tasks                                           512
        spl_panic_halt                                                 1
        spl_schedule_hrtimeout_slack_us                                0
        spl_taskq_kick                                                 0
        spl_taskq_thread_bind                                          0
        spl_taskq_thread_dynamic                                       1
        spl_taskq_thread_priority                                      1
        spl_taskq_thread_sequential                                    4
        spl_taskq_thread_timeout_ms                                10000

Tunables:
        dbuf_cache_hiwater_pct                                        10
        dbuf_cache_lowater_pct                                        10
        dbuf_cache_max_bytes                        18446744073709551615
        dbuf_cache_shift                                               5
        dbuf_metadata_cache_max_bytes               18446744073709551615
        dbuf_metadata_cache_shift                                      6
        dbuf_mutex_cache_shift                                         0
        ddt_zap_default_bs                                            15
        ddt_zap_default_ibs                                           15
        dmu_object_alloc_chunk_shift                                   7
        dmu_prefetch_max                                       134217728
        icp_aes_impl                cycle [fastest] generic x86_64 aesni
        icp_gcm_avx_chunk_size                                     32736
        icp_gcm_impl               cycle [fastest] avx generic pclmulqdq
        ignore_hole_birth                                              1
        l2arc_exclude_special                                          0
        l2arc_feed_again                                               1
        l2arc_feed_min_ms                                            200
        l2arc_feed_secs                                                1
        l2arc_headroom                                                 2
        l2arc_headroom_boost                                         200
        l2arc_meta_percent                                            33
        l2arc_mfuonly                                                  0
        l2arc_noprefetch                                               1
        l2arc_norw                                                     0
        l2arc_rebuild_blocks_min_l2size                       1073741824
        l2arc_rebuild_enabled                                          1
        l2arc_trim_ahead                                               0
        l2arc_write_boost                                        8388608
        l2arc_write_max                                          8388608
        metaslab_aliquot                                         1048576
        metaslab_bias_enabled                                          1
        metaslab_debug_load                                            0
        metaslab_debug_unload                                          0
        metaslab_df_max_search                                  16777216
        metaslab_df_use_largest_segment                                0
        metaslab_force_ganging                                  16777217
        metaslab_force_ganging_pct                                     3
        metaslab_fragmentation_factor_enabled                          1
        metaslab_lba_weighting_enabled                                 1
        metaslab_preload_enabled                                       1
        metaslab_unload_delay                                         32
        metaslab_unload_delay_ms                                  600000
        send_holes_without_birth_time                                  1
        spa_asize_inflation                                           24
        spa_config_path                             /etc/zfs/zpool.cache
        spa_load_print_vdev_tree                                       0
        spa_load_verify_data                                           1
        spa_load_verify_metadata                                       1
        spa_load_verify_shift                                          4
        spa_slop_shift                                                 5
        spa_upgrade_errlog_limit                                       0
        vdev_file_logical_ashift                                       9
        vdev_file_physical_ashift                                      9
        vdev_removal_max_span                                      32768
        vdev_validate_skip                                             0
        zap_iterate_prefetch                                           1
        zap_micro_max_size                                        131072
        zfetch_max_distance                                     67108864
        zfetch_max_idistance                                    67108864
        zfetch_max_sec_reap                                            2
        zfetch_max_streams                                             8
        zfetch_min_distance                                      4194304
        zfetch_min_sec_reap                                            1
        zfs_abd_scatter_enabled                                        1
        zfs_abd_scatter_max_order                                     13
        zfs_abd_scatter_min_size                                    1536
        zfs_admin_snapshot                                             0
        zfs_allow_redacted_dataset_mount                               0
        zfs_arc_average_blocksize                                   8192
        zfs_arc_dnode_limit                                            0
        zfs_arc_dnode_limit_percent                                   10
        zfs_arc_dnode_reduce_percent                                  10
        zfs_arc_evict_batch_limit                                     10
        zfs_arc_eviction_pct                                         200
        zfs_arc_grow_retry                                             0
        zfs_arc_lotsfree_percent                                      10
        zfs_arc_max                                          96636764160
        zfs_arc_meta_balance                                         500
        zfs_arc_min                                                    0
        zfs_arc_min_prefetch_ms                                        0
        zfs_arc_min_prescient_prefetch_ms                              0
        zfs_arc_pc_percent                                             0
        zfs_arc_prune_task_threads                                     1
        zfs_arc_shrink_shift                                           0
        zfs_arc_shrinker_limit                                     10000
        zfs_arc_sys_free                                               0
        zfs_async_block_max_blocks                  18446744073709551615
        zfs_autoimport_disable                                         1
        zfs_blake3_impl   cycle [fastest] generic sse2 sse41 avx2 avx512
        zfs_brt_prefetch                                               1
        zfs_btree_verify_intensity                                     0
        zfs_checksum_events_per_second                                20
        zfs_commit_timeout_pct                                         5
        zfs_compressed_arc_enabled                                     1
        zfs_condense_indirect_commit_entry_delay_ms                    0
        zfs_condense_indirect_obsolete_pct                            25
        zfs_condense_indirect_vdevs_enable                             1
        zfs_condense_max_obsolete_bytes                       1073741824
        zfs_condense_min_mapping_bytes                            131072
        zfs_dbgmsg_enable                                              1
        zfs_dbgmsg_maxsize                                       4194304
        zfs_dbuf_state_index                                           0
        zfs_ddt_data_is_special                                        1
        zfs_deadman_checktime_ms                                   60000
        zfs_deadman_enabled                                            1
        zfs_deadman_failmode                                        wait
        zfs_deadman_synctime_ms                                   600000
        zfs_deadman_ziotime_ms                                    300000
        zfs_dedup_prefetch                                             0
        zfs_default_bs                                                 9
        zfs_default_ibs                                               15
        zfs_delay_min_dirty_percent                                   60
        zfs_delay_scale                                           500000
        zfs_delete_blocks                                          20480
        zfs_dirty_data_max                                    4294967296
        zfs_dirty_data_max_max                                4294967296
        zfs_dirty_data_max_max_percent                                25
        zfs_dirty_data_max_percent                                    10
        zfs_dirty_data_sync_percent                                   20
        zfs_disable_ivset_guid_check                                   0
        zfs_dmu_offset_next_sync                                       1
        zfs_embedded_slog_min_ms                                      64
        zfs_expire_snapshot                                          300
        zfs_fallocate_reserve_percent                                110
        zfs_flags                                                      0
        zfs_fletcher_4_impl [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2 avx512f avx512bw
        zfs_free_bpobj_enabled                                         1
        zfs_free_leak_on_eio                                           0
        zfs_free_min_time_ms                                        1000
        zfs_history_output_max                                   1048576
        zfs_immediate_write_sz                                     32768
        zfs_initialize_chunk_size                                1048576
        zfs_initialize_value                        16045690984833335022
        zfs_keep_log_spacemaps_at_export                               0
        zfs_key_max_salt_uses                                  400000000
        zfs_livelist_condense_new_alloc                                0
        zfs_livelist_condense_sync_cancel                              0
        zfs_livelist_condense_sync_pause                               0
        zfs_livelist_condense_zthr_cancel                              0
        zfs_livelist_condense_zthr_pause                               0
        zfs_livelist_max_entries                                  500000
        zfs_livelist_min_percent_shared                               75
        zfs_lua_max_instrlimit                                 100000000
        zfs_lua_max_memlimit                                   104857600
        zfs_max_async_dedup_frees                                 100000
        zfs_max_dataset_nesting                                       50
        zfs_max_log_walking                                            5
        zfs_max_logsm_summary_length                                  10
        zfs_max_missing_tvds                                           0
        zfs_max_nvlist_src_size                                        0
        zfs_max_recordsize                                      16777216
        zfs_metaslab_find_max_tries                                  100
        zfs_metaslab_fragmentation_threshold                          70
        zfs_metaslab_max_size_cache_sec                             3600
        zfs_metaslab_mem_limit                                        25
        zfs_metaslab_segment_weight_enabled                            1
        zfs_metaslab_switch_threshold                                  2
        zfs_metaslab_try_hard_before_gang                              0
        zfs_mg_fragmentation_threshold                                95
        zfs_mg_noalloc_threshold                                       0
        zfs_min_metaslabs_to_flush                                     1
        zfs_multihost_fail_intervals                                  10
        zfs_multihost_history                                          0
        zfs_multihost_import_intervals                                20
        zfs_multihost_interval                                      1000
        zfs_multilist_num_sublists                                     0
        zfs_no_scrub_io                                                0
        zfs_no_scrub_prefetch                                          0
        zfs_nocacheflush                                               0
        zfs_nopwrite_enabled                                           1
        zfs_object_mutex_size                                         64
        zfs_obsolete_min_time_ms                                     500
        zfs_override_estimate_recordsize                               0
        zfs_pd_bytes_max                                        52428800
        zfs_per_txg_dirty_frees_percent                               30
        zfs_prefetch_disable                                           0
        zfs_read_history                                               0
        zfs_read_history_hits                                          0
        zfs_rebuild_max_segment                                  1048576
        zfs_rebuild_scrub_enabled                                      1
        zfs_rebuild_vdev_limit                                  67108864
        zfs_reconstruct_indirect_combinations_max                   4096
        zfs_recover                                                    0
        zfs_recv_best_effort_corrective                                0
        zfs_recv_queue_ff                                             20
        zfs_recv_queue_length                                   16777216
        zfs_recv_write_batch_size                                1048576
        zfs_removal_ignore_errors                                      0
        zfs_removal_suspend_progress                                   0
        zfs_remove_max_segment                                  16777216
        zfs_resilver_disable_defer                                     0
        zfs_resilver_min_time_ms                                    3000
        zfs_scan_blkstats                                              0
        zfs_scan_checkpoint_intval                                  7200
        zfs_scan_fill_weight                                           3
        zfs_scan_ignore_errors                                         0
        zfs_scan_issue_strategy                                        0
        zfs_scan_legacy                                                0
        zfs_scan_max_ext_gap                                     2097152
        zfs_scan_mem_lim_fact                                         20
        zfs_scan_mem_lim_soft_fact                                    20
        zfs_scan_report_txgs                                           0
        zfs_scan_strict_mem_lim                                        0
        zfs_scan_suspend_progress                                      0
        zfs_scan_vdev_limit                                     16777216
        zfs_scrub_error_blocks_per_txg                              4096
        zfs_scrub_min_time_ms                                       1000
        zfs_send_corrupt_data                                          0
        zfs_send_no_prefetch_queue_ff                                 20
        zfs_send_no_prefetch_queue_length                        1048576
        zfs_send_queue_ff                                             20
        zfs_send_queue_length                                   16777216
        zfs_send_unmodified_spill_blocks                               1
        zfs_sha256_impl       cycle [fastest] generic x64 ssse3 avx avx2
        zfs_sha512_impl             cycle [fastest] generic x64 avx avx2
        zfs_slow_io_events_per_second                                 20
        zfs_snapshot_history_enabled                                   1
        zfs_spa_discard_memory_limit                            16777216
        zfs_special_class_metadata_reserve_pct                        25
        zfs_sync_pass_deferred_free                                    2
        zfs_sync_pass_dont_compress                                    8
        zfs_sync_pass_rewrite                                          2
        zfs_sync_taskq_batch_pct                                      75
        zfs_traverse_indirect_prefetch_limit                          32
        zfs_trim_extent_bytes_max                              134217728
        zfs_trim_extent_bytes_min                                  32768
        zfs_trim_metaslab_skip                                         0
        zfs_trim_queue_limit                                          10
        zfs_trim_txg_batch                                            32
        zfs_txg_history                                              100
        zfs_txg_timeout                                                5
        zfs_unflushed_log_block_max                               131072
        zfs_unflushed_log_block_min                                 1000
        zfs_unflushed_log_block_pct                                  400
        zfs_unflushed_log_txg_max                                   1000
        zfs_unflushed_max_mem_amt                             1073741824
        zfs_unflushed_max_mem_ppm                                   1000
        zfs_unlink_suspend_progress                                    0
        zfs_user_indirect_is_special                                   1
        zfs_vdev_aggregation_limit                               1048576
        zfs_vdev_aggregation_limit_non_rotating                   131072
        zfs_vdev_async_read_max_active                                 3
        zfs_vdev_async_read_min_active                                 1
        zfs_vdev_async_write_active_max_dirty_percent                 60
        zfs_vdev_async_write_active_min_dirty_percent                 30
        zfs_vdev_async_write_max_active                               10
        zfs_vdev_async_write_min_active                                2
        zfs_vdev_def_queue_depth                                      32
        zfs_vdev_default_ms_count                                    200
        zfs_vdev_default_ms_shift                                     29
        zfs_vdev_failfast_mask                                         1
        zfs_vdev_initializing_max_active                               1
        zfs_vdev_initializing_min_active                               1
        zfs_vdev_max_active                                         1000
        zfs_vdev_max_auto_ashift                                      14
        zfs_vdev_max_ms_shift                                         34
        zfs_vdev_min_auto_ashift                                       9
        zfs_vdev_min_ms_count                                         16
        zfs_vdev_mirror_non_rotating_inc                               0
        zfs_vdev_mirror_non_rotating_seek_inc                          1
        zfs_vdev_mirror_rotating_inc                                   0
        zfs_vdev_mirror_rotating_seek_inc                              5
        zfs_vdev_mirror_rotating_seek_offset                     1048576
        zfs_vdev_ms_count_limit                                   131072
        zfs_vdev_nia_credit                                            5
        zfs_vdev_nia_delay                                             5
        zfs_vdev_open_timeout_ms                                    1000
        zfs_vdev_queue_depth_pct                                    1000
        zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2 avx512f avx512bw
        zfs_vdev_read_gap_limit                                    32768
        zfs_vdev_rebuild_max_active                                    3
        zfs_vdev_rebuild_min_active                                    1
        zfs_vdev_removal_max_active                                    2
        zfs_vdev_removal_min_active                                    1
        zfs_vdev_scheduler                                        unused
        zfs_vdev_scrub_max_active                                      3
        zfs_vdev_scrub_min_active                                      1
        zfs_vdev_sync_read_max_active                                 10
        zfs_vdev_sync_read_min_active                                 10
        zfs_vdev_sync_write_max_active                                10
        zfs_vdev_sync_write_min_active                                10
        zfs_vdev_trim_max_active                                       2
        zfs_vdev_trim_min_active                                       1
        zfs_vdev_write_gap_limit                                    4096
        zfs_vnops_read_chunk_size                                1048576
        zfs_wrlog_data_max                                    8589934592
        zfs_xattr_compat                                               0
        zfs_zevent_len_max                                           512
        zfs_zevent_retain_expire_secs                                900
        zfs_zevent_retain_max                                       2000
        zfs_zil_clean_taskq_maxalloc                             1048576
        zfs_zil_clean_taskq_minalloc                                1024
        zfs_zil_clean_taskq_nthr_pct                                 100
        zfs_zil_saxattr                                                1
        zil_maxblocksize                                          131072
        zil_min_commit_timeout                                      5000
        zil_nocacheflush                                               0
        zil_replay_disable                                             0
        zil_slog_bulk                                             786432
        zio_deadman_log_all                                            0
        zio_dva_throttle_enabled                                       1
        zio_requeue_io_start_cut_in_line                               1
        zio_slow_io_ms                                             30000
        zio_taskq_batch_pct                                           80
        zio_taskq_batch_tpq                                            0
        zstd_abort_size                                           131072
        zstd_earlyabort_pass                                           1
        zvol_blk_mq_blocks_per_thread                                  8
        zvol_blk_mq_queue_depth                                      128
        zvol_enforce_quotas                                            1
        zvol_inhibit_dev                                               0
        zvol_major                                                   230
        zvol_max_discard_blocks                                    16384
        zvol_open_timeout_ms                                        1000
        zvol_prefetch_bytes                                       131072
        zvol_request_sync                                              0
        zvol_threads                                                   0
        zvol_use_blk_mq                                                0
        zvol_volmode                                                   2

ZIL committed transactions:                                        91.9M
        Commit requests:                                           18.3M
        Flushes to stable storage:                                 18.1M
        Transactions to SLOG storage pool:            0 Bytes          0
        Transactions to non-SLOG storage pool:      236.3 GiB       8.7M
 
winnielinnie

Joined: Oct 22, 2019
Messages: 3,641
and some NFS sharing (Linux clients) from a pool of spinners.
SLOG
  • I don't think any of my usage involves sync writes, so this seems unlikely to benefit
What about your NFS clients? If the dataset is set to "sync=standard", then NFS will use sync writes by default. Perhaps a fast NVMe SLOG could increase write performance for the NFS clients?
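
For example, you can check what the share's dataset will do with client sync requests (dataset name below is just a placeholder):

Code:
# sync=standard honors client sync requests; sync=always forces them;
# sync=disabled ignores them (faster, but unsafe across a power loss)
zfs get sync tank/nfs-share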



Move the apps pool
Was going to be my first suggestion, but if your current "SSD-only" pool is fast enough, you may not even notice a difference if you move your apps to the new pool.



L2ARC
  • I'm not clear on how to interpret the output of arc_summary, but it's included below
I've found that as long as your ARC in RAM is being used effectively, you are unlikely to see benefits from a dedicated L2ARC, even with regard to metadata. For OpenZFS 2.2.0+, performance should remain "smart" and consistent. I'd hold off on installing an L2ARC vdev.
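
If you want a number to go by, the overall hit ratio can be pulled straight from the kstats (a sketch assuming the stock /proc/spl/kstat/zfs/arcstats location on SCALE):

Code:
# Overall ARC hit ratio since boot, computed from the raw counters
awk '$1 == "hits" {h = $3} $1 == "misses" {m = $3}
     END {printf "ARC hit ratio: %.2f%%\n", 100 * h / (h + m)}' /proc/spl/kstat/zfs/arcstats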
 

Etorix

Wizard
Joined: Dec 30, 2020
Messages: 2,134
A 99.5% ARC hit rate (99.9% for metadata) clearly means that you do NOT need an L2ARC.
NFS defaults to sync writes, though that may be disabled for basic file sharing, and macOS is known to request sync writes over SMB, in particular for Time Machine. So you might have a use for a SLOG, but it's quite possible that it wouldn't be critical and may not make any practical difference.
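
If you do experiment with one, a SLOG is at least reversible (pool and device names below are placeholders):

Code:
# Add a mirrored log vdev; unlike a special vdev on a raidz pool,
# a log vdev can later be removed with 'zpool remove'
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1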

To match the raidz2 data vdevs, a special vdev would best be a 3-way mirror. And it would indeed be irreversible. With 128 GB RAM and a 99.9% hit rate for metadata, a special vdev would not even accelerate file browsing.
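
For reference, adding one would look roughly like this (pool and device names are placeholders), and on a pool with raidz top-level vdevs there is no 'zpool remove' to undo it:

Code:
# Match the raidz2 redundancy with a 3-way mirror special vdev
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1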

Moving the app pool and/or boot to M.2 is always a possibility.

Accepting that not every capability of the board will be used is also an option.
 
winnielinnie

Joined: Oct 22, 2019
Messages: 3,641
Or you can do what I did, which sort of decouples my setup (as long as you don't mind the extra cost):


A dedicated fast NVMe 2-way mirror pool for a "dump and go".

A quick share to dump files from your phone or elsewhere. Temporary storage to hold large files that you're not sure what to do with, or that you might later decide to save permanently on your spinners' primary pool. Think of it like a "staging area". If you lose it? Eh, oh well. Not that important. And you can exclude it from backups as well.

In my case, I use a dataset on my NVMe pool just for this. However, this same NVMe pool also holds my jails and System Dataset. So in your case, it would be a pure "dump and go" fast pool to offload storage from your primary pool until you decide what to do with the files.

(This also helps with fragmentation: large files that you constantly delete because "I don't really need to keep these" only ever touch the "dump and go" pool, without contributing to fragmentation on the primary pool. It also makes a good place to download torrents, sparing your primary pool.)
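
The setup itself is nothing exotic; something like this (pool, device, and dataset names are placeholders):

Code:
# A disposable mirrored scratch pool on the two M.2 slots
zpool create scratch mirror /dev/nvme0n1 /dev/nvme1n1
zfs create scratch/dump
# ...then share scratch/dump over SMB and leave it out of any backup
# and replication tasks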
 

joeschmuck

Old Man
Moderator
Joined: May 28, 2011
Messages: 10,994
Looks like you designed your system properly; that is a very good thing. But as @winnielinnie said, you could make another pool and play around with it. Maybe you will find something useful for it. The downside of not knowing what you'll do with them is choosing the right capacity of M.2 sticks to buy.

Yesterday I built a complete all-NVMe system with six 4 TB NVMe drives. I run ESXi, so two drives are dedicated to ESXi and four sit on a PCIe card as the pool. The price was very good at $200 USD per drive. I may rework the pool to use five or all six of the NVMe drives and boot from an SSD, but nothing is set in stone yet. I'm realizing right now how difficult it could be to move my drives into another computer should my system fail; it's not as easy as with SATA connections. But I'll cross that bridge should I ever need to.

EDIT: Out of curiosity, how long had the system been running before the arc_summary was generated? Or do the statistics migrate with the drives in the system? I'm asking in case this was just a short sample, whereas a longer period of time would show the real efficiency.
 

Patrick M. Hausen

Hall of Famer
Joined: Nov 25, 2013
Messages: 7,776
How many concurrent clients do you have, and how large do your directories tend to be? In my experience with "an office full" of SMB users, a metadata vdev greatly improves the general SMB experience.
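
A crude way to gauge whether metadata latency is what users actually feel is to time a walk of one of the big shares, then repeat it warm (path is a placeholder):

Code:
# Second run should be served from ARC; a large gap suggests metadata
# caching (or a special vdev) would be noticeable
time find /mnt/tank/media -type f | wc -l
time find /mnt/tank/media -type f | wc -l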
 

danb35

Hall of Famer
Joined: Aug 16, 2011
Messages: 15,504
Moving the app pool and/or boot to M.2 is always a possibility.
I'd thought of moving the boot pool, but didn't mention it--I don't see that it'd benefit from the extra speed. Apps might.
macOS is known to request sync writes over SMB, in particular for Time Machine
Interesting--I think I'd missed this. There's very little writing going on via NFS, but a good bit by SMB, including Time Machine.
if your current "SSD-only" pool is fast enough
That's hard to say--the applications page in TrueNAS is quite slow. Pulling up the "edit" page for an app, for example, takes about 15 seconds. But I don't know what's causing that lag. The actual running of the apps is generally fine, though I'm seeing problems with Plex from time to time that I didn't see under CORE (when I was running all my jails on my data pool).
To keep with the raidz2 data, a special vdev would best be 3-way.
...which would mean an add-on card. Not a bad thing; they're cheap. But it doesn't sound like the special vdev would give much benefit anyway.
Accepting that not all possibilities of the board will be used is also an option.
Certainly a possibility. I'm already not using the onboard 10 GbE, so...
Out of curiosity, the arc_summary, how long was the system running before the arc summary was created?
Current uptime is 20 days.
How many concurrent clients do you have
Tricky question. Three Macs in total, but usually not more than two doing anything at any one time. Five Proxmox nodes are connected to NFS exports, but not normally actively using them. Maybe 4-5 VMs.
how large do your directories tend to be?
Some of them are pretty large; a few thousand files and 4+ TB in some of my media directories.
 