Do I need SLOG?

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yes, need that only. Having 128GB. Seems like first I'll need to test and then check. Just a quick question: if I set up a drive for metadata, how would I know the difference? Any particular method to test?
A metadata vdev speeds up both reads and writes. It holds part of the pool data, so it is CRITICAL for the pool and needs redundancy; a 3-way mirror is what matches raidz2 for bulk data. With HDDs in raidz#, a metadata vdev cannot be removed.
A persistent metadata L2ARC speeds up reads only; writes go to the HDD pool anyway. Since the L2ARC only holds a copy of the metadata, it is NOT critical and can be removed at any time, even when using raidz#. No redundancy required: if the L2ARC device dies, you lose performance but not data.
For directory browsing only, a metadata L2ARC is the way, and you have the RAM to support it. Go and get one of these Optane 905p after all…
(A regular SSD would do it, but Optane is better.)
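(Note that "persistent" depends on a tunable; on CORE / FreeBSD 13 you can check it like this:)

Code:
# 1 = L2ARC contents are rebuilt after a reboot (persistent),
# 0 = the cache starts cold on every boot
sysctl vfs.zfs.l2arc.rebuild_enabled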

Any mATX board from SuperMicro with the LGA3647 or LGA4189 socket? 2x onboard SAS ports would be nice, along with 1x Base-T (10GbE) and probably an SFP+ (10GbE). With the case I have, it's difficult to find such a board. If you can help or offer some insight, it would be really helpful. Thank you again.
Not sure why you'd want both SFP+ and 10GBase-T onboard for a NAS. TrueNAS really only wants one interface to your network.

If you absolutely insist, embedded Ice Lake-D boards such as X12SDV-8C-SPT8F have both SFP28 (backwards compatible with SFP+) and 10GBase-T onboard (plus an i350-AM4 which leaves me totally clueless, but I don't work in TelCo…). But the higher clocks of a Xeon Scalable would likely serve you best for maximal performance with SMB.
Onboard SAS is out of fashion in favour of NVMe storage, so you'll need a card for that. What use do you have for the PCIe slots anyway?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
If you're on CORE, you can also use the handy zilstat command during a write workload to see if you're actually making use of the ZIL (ZFS Intent Log), which will tell you if you could benefit from a SLOG.
Nice. Will keep in mind.
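If I read the usage right, it would be something like this while a copy runs (interval and count as arguments):

Code:
# Report ZIL activity once per second for 60 seconds during a write workload;
# all-zero counters mean the ZIL is idle and a SLOG would buy nothing
zilstat 1 60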

I'm figuring out the need for a metadata drive now.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm figuring out the need for a metadata drive now.
You most likely do not need it; consider whether you want L2ARC instead: same (read) benefits, fewer risks and costs.
An L2ARC drive and the special VDEV are different things.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
A metadata vdev speeds up both reads and writes. It holds part of the pool data, so it is CRITICAL for the pool and needs redundancy; a 3-way mirror is what matches raidz2 for bulk data. With HDDs in raidz#, a metadata vdev cannot be removed.
A persistent metadata L2ARC speeds up reads only; writes go to the HDD pool anyway. Since the L2ARC only holds a copy of the metadata, it is NOT critical and can be removed at any time, even when using raidz#. No redundancy required: if the L2ARC device dies, you lose performance but not data.
Yes, aware of the above facts. I'm just not sure whether I would need to set up a metadata drive for my use case or not.

For directory browsing only, a metadata L2ARC is the way, and you have the RAM to support it. Go and get one of these Optane 905p after all…
(A regular SSD would do it, but Optane is better.)
OMG. Are there several kinds of metadata? I mean, I thought there was one simple metadata drive. Of course, redundancy is a must: ideally 3x or 4x drives. But what is a metadata L2ARC here, and how is it different from a normal metadata vdev?

Not sure why you'd want both SFP+ and 10GBase-T onboard for a NAS. TrueNAS really only wants one interface to your network.
Because I'm still on Base-T but will soon be moving to SFP+, and I don't want to be in a situation where I need to buy another board or a NIC.

If you absolutely insist, embedded Ice Lake-D boards such as X12SDV-8C-SPT8F have both SFP28 (backwards compatible with SFP+) and 10GBase-T onboard (plus an i350-AM4 which leaves me totally clueless, but I don't work in TelCo…). But the higher clocks of a Xeon Scalable would likely serve you best for maximal performance with SMB.
I actually searched, but those are much older Xeons and the onboard SAS is too old; I'm not sure it would work well with TrueNAS, so I avoided it in the first place.

Onboard SAS is out of fashion in favour of NVMe storage, so you'll need a card for that.
Yes, yes. I have an LSI 9400-16i, bought a couple of months ago.

What use do you have for the PCIe slots anyway?
Reserved for the HBA card, in case the onboard isn't sufficient or I need to expand.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
You most likely do not need it; consider whether you want L2ARC instead: same (read) benefits, fewer risks and costs.
Hmm, I see. What about the Time Machine restoration time? I don't want the restore to take days…

Secondly, I don't want it to take too long to browse into a directory or to get the info of a directory.

An L2ARC drive and the special VDEV are different things.
Yes, aware of that. I don't need an L2ARC at least for now. I can put in 256GB if needed. But another question: how do I see whether I need to add more RAM? I know one should try to upgrade the RAM first and only set up an L2ARC if that's not possible, RAM being the fastest. Also, is there any particular amount of RAM required before one tries to set up an L2ARC?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Look at your ARC stats, specifically the hit rate. If that is near 100% you are already going as fast as you can.
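For example, something like this pulls the overall ratios out of the report:

Code:
# The hit/miss ratios are reported near the top of arc_summary
arc_summary | grep -A 3 "ARC total accesses"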

If this is going to be a CORE installation I could share some nice Grafana dashboards ...
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I don't need an L2ARC at least for now.
Then you don't need a fusion pool with a metadata VDEV either.

Also, is there any particular amount of RAM required before one tries to set up an L2ARC?
At least 64GB; usually it's suggested to max out your RAM before going L2ARC.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
OMG. Are there several kinds of metadata? I mean, I thought there was one simple metadata drive. Of course, redundancy is a must: ideally 3x or 4x drives. But what is a metadata L2ARC here, and how is it different from a normal metadata vdev?
An L2ARC doesn't need redundancy, because it only ever holds a copy of data that has a stable home on the pool. Lose the L2ARC, and it's the same as a "cache miss" - you retrieve the data from the pool. L2ARC devices can be added or removed from any pool while it's online.

A dedicated metadata vdev is that "stable home on the pool" and thus needs to be redundant, because there's no other place to get the data from if it's gone. Metadata vdevs can be added to a pool, but can only be removed under a fixed set of conditions - the foremost of which is "your pool does not have any RAIDZ vdevs in it" - and I believe yours already does. So don't "add one to test it out" or else you're stuck with it.

A "metadata L2ARC" would be adding an L2ARC device, and then setting secondarycache=metadata on the pool or dataset(s) in question, thus telling it to store only the metadata there. ZFS metadata reads would then hopefully come from that L2ARC drive but without the risk of losing your pool if it suffers a failure, and retaining the ability to remove it later.

But as suggested by @Patrick M. Hausen and @Davvo - the general wisdom is to maximize RAM first, and monitor your arc_summary results to see if you have a large amount of metadata demand misses. Metadata is generally only 1% or less of the size of your data, so it should be reasonably easy to have it staying in your RAM.
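The quick way to pull that section out:

Code:
# A large share of "Demand metadata" misses here suggests the ARC
# keeps evicting metadata it later needs again
arc_summary | grep -A 4 "Cache misses by data type"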
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Because I'm still on Base-T but will soon be moving to SFP+, and I don't want to be in a situation where I need to buy another board or a NIC.
You can always put an RJ45 transceiver in an SFP+ cage, so this reads like you should go for SFP+, be it onboard or as a NIC.

I actually searched, but those are much older Xeons and the onboard SAS is too old; I'm not sure it would work well with TrueNAS, so I avoided it in the first place.
Is this a comment on the X12SDV (current embedded generation, only one generation behind Sapphire Rapids) I linked, or are you confusing it with Broadwell-D X10SDV?

Yes, yes. I have an LSI 9400-16i, bought a couple of months ago.
Hmm, 9400 is maybe a bit TOO new…

Reserved for the HBA card, in case the onboard isn't sufficient or I need to expand.
So an X11SPM-TPF with onboard SFP+, 12 SATA ports and x16/x16/x8 PCIe slots should be plenty, including future expansion.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Look at your ARC stats, specifically the hit rate. If that is near 100% you are already going as fast as you can.

If this is going to be a CORE installation I could share some nice Grafana dashboards ...
Will share the information soon. Yes, I'm currently on CORE and experimenting before I deem it final.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
An L2ARC doesn't need redundancy, because it only ever holds a copy of data that has a stable home on the pool. Lose the L2ARC, and it's the same as a "cache miss" - you retrieve the data from the pool. L2ARC devices can be added or removed from any pool while it's online.

A dedicated metadata vdev is that "stable home on the pool" and thus needs to be redundant, because there's no other place to get the data from if it's gone. Metadata vdevs can be added to a pool, but can only be removed under a fixed set of conditions - the foremost of which is "your pool does not have any RAIDZ vdevs in it" - and I believe yours already does. So don't "add one to test it out" or else you're stuck with it.

A "metadata L2ARC" would be adding an L2ARC device, and then setting secondarycache=metadata on the pool or dataset(s) in question, thus telling it to store only the metadata there. ZFS metadata reads would then hopefully come from that L2ARC drive but without the risk of losing your pool if it suffers a failure, and retaining the ability to remove it later.

But as suggested by @Patrick M. Hausen and @Davvo - the general wisdom is to maximize RAM first, and monitor your arc_summary results to see if you have a large amount of metadata demand misses. Metadata is generally only 1% or less of the size of your data, so it should be reasonably easy to have it staying in your RAM.
Hey ya,

Sorry for the late reply. I was busy testing things out. I had a spare consumer drive and thought to set it up as a metadata vdev, and holy shit, it saves like 2 minutes. The NVMe was an SK Hynix PC801 512GB installed in a PCIe 3.0 NVMe slot. It has no PLP, it's a single drive, and it doesn't have the IOPS and latency of Optanes, but I get the idea now. Seems like this could actually be beneficial for my use case.

I understood all your points above and would now like to set up a metadata L2ARC, as this sounds really helpful: the faster pool performance of a special vdev, but without having to worry so much about redundancy, so the risk is mitigated.

One more question I have: I read that when you don't add a special vdev at pool creation time, the existing pool keeps its metadata in the data vdevs, and when you later add a metadata vdev, only newer metadata is stored on it. Is that correct?

Also, if I understand correctly, the metadata is still stored in the pool but will be copied to the metadata L2ARC device while the pool is up? Something like a cache?

Secondly, how big is the performance difference between a dedicated metadata vdev and a metadata-only L2ARC?

I ran the test and here's what I see now. Please note that a dedicated metadata vdev is set at the moment; not sure if that's going to make any difference in the report below.

Code:
Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

root@truenas[~]# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Dec 03 15:37:43 2023
FreeBSD 13.1-RELEASE-p9                                    zpl version 5
Machine: truenas.local (amd64)                          spa version 5000

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    17.3 %   10.8 GiB
        Target size (adaptive):                        17.3 %   10.9 GiB
        Min size (hard limit):                          3.2 %    2.0 GiB
        Max size (high water):                           31:1   62.7 GiB
        Most Frequently Used (MFU) cache size:          2.1 %  229.3 MiB
        Most Recently Used (MRU) cache size:           97.9 %   10.3 GiB
        Metadata cache size (hard limit):              75.0 %   47.1 GiB
        Metadata cache size (current):                  1.0 %  492.2 MiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                     2.6 %  125.2 MiB

ARC hash breakdown:
        Elements max:                                             241.1k
        Elements current:                             100.0 %     241.1k
        Collisions:                                                 3.4k
        Chain max:                                                     2
        Chains:                                                     3.3k

ARC misc:
        Deleted:                                                      21
        Mutex misses:                                                  0
        Eviction skips:                                             2.1k
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   1.0 MiB
        L2 eligible MFU evictions:                      0.0 %    0 Bytes
        L2 eligible MRU evictions:                    100.0 %    1.0 MiB
        L2 ineligible evictions:                                 4.0 KiB

ARC total accesses (hits + misses):                                 2.6M
        Cache hit ratio:                               90.7 %       2.4M
        Cache miss ratio:                               9.3 %     241.1k
        Actual hit ratio (MFU + MRU hits):             90.4 %       2.4M
        Data demand efficiency:                        26.3 %     243.8k
        Data prefetch efficiency:                     < 0.1 %      34.7k

Cache hits by cache type:
        Most frequently used (MFU):                    85.2 %       2.0M
        Most recently used (MRU):                      14.5 %     342.0k
        Most frequently used (MFU) ghost:               0.0 %          0
        Most recently used (MRU) ghost:                 0.0 %          0
        Anonymously used:                               0.4 %       8.4k

Cache hits by data type:
        Demand data:                                    2.7 %      64.1k
        Prefetch data:                                < 0.1 %          7
        Demand metadata:                               96.9 %       2.3M
        Prefetch metadata:                              0.4 %       9.8k

Cache misses by data type:
        Demand data:                                   74.5 %     179.7k
        Prefetch data:                                 14.4 %      34.7k
        Demand metadata:                                8.2 %      19.7k
        Prefetch metadata:                              2.9 %       7.0k

DMU prefetch efficiency:                                           30.5k
        Hit ratio:                                     64.2 %      19.6k
        Miss ratio:                                    35.8 %      10.9k

L2ARC not detected, skipping section

Tunables:
        abd_scatter_enabled                                            1
        abd_scatter_min_size                                        4097
        allow_redacted_dataset_mount                                   0
        anon_data_esize                                                0
        anon_metadata_esize                                            0
        anon_size                                                  17920
        arc.average_blocksize                                       8192
        arc.dnode_limit                                                0
        arc.dnode_limit_percent                                       10
        arc.dnode_reduce_percent                                      10
        arc.evict_batch_limit                                         10
        arc.eviction_pct                                             200
        arc.grow_retry                                                 0
        arc.lotsfree_percent                                          10
        arc.max                                                        0
        arc.meta_adjust_restarts                                    4096
        arc.meta_limit                                                 0
        arc.meta_limit_percent                                        75
        arc.meta_min                                                   0
        arc.meta_prune                                             10000
        arc.meta_strategy                                              1
        arc.min                                                        0
        arc.min_prefetch_ms                                            0
        arc.min_prescient_prefetch_ms                                  0
        arc.p_dampener_disable                                         1
        arc.p_min_shift                                                0
        arc.pc_percent                                                 0
        arc.prune_task_threads                                         1
        arc.shrink_shift                                               0
        arc.sys_free                                                   0
        arc_free_target                                           346502
        arc_max                                                        0
        arc_min                                                        0
        arc_no_grow_shift                                              5
        async_block_max_blocks                      18446744073709551615
        autoimport_disable                                             1
        btree_verify_intensity                                         0
        ccw_retry_interval                                           300
        checksum_events_per_second                                    20
        commit_timeout_pct                                             5
        compressed_arc_enabled                                         1
        condense.indirect_commit_entry_delay_ms                        0
        condense.indirect_obsolete_pct                                25
        condense.indirect_vdevs_enable                                 1
        condense.max_obsolete_bytes                           1073741824
        condense.min_mapping_bytes                                131072
        condense_pct                                                 200
        crypt_sessions                                                 0
        dbgmsg_enable                                                  1
        dbgmsg_maxsize                                           4194304
        dbuf.cache_shift                                               5
        dbuf.metadata_cache_max_bytes               18446744073709551615
        dbuf.metadata_cache_shift                                      6
        dbuf_cache.hiwater_pct                                        10
        dbuf_cache.lowater_pct                                        10
        dbuf_cache.max_bytes                        18446744073709551615
        dbuf_state_index                                               0
        ddt_data_is_special                                            1
        deadman.checktime_ms                                       60000
        deadman.enabled                                                1
        deadman.failmode                                            wait
        deadman.synctime_ms                                       600000
        deadman.ziotime_ms                                        300000
        debug                                                          0
        debugflags                                                     0
        dedup.prefetch                                                 0
        default_bs                                                     9
        default_ibs                                                   15
        delay_min_dirty_percent                                       60
        delay_scale                                               500000
        dirty_data_max                                        4294967296
        dirty_data_max_max                                    4294967296
        dirty_data_max_max_percent                                    25
        dirty_data_max_percent                                        10
        dirty_data_sync_percent                                       20
        disable_ivset_guid_check                                       0
        dmu_object_alloc_chunk_shift                                   7
        dmu_offset_next_sync                                           1
        dmu_prefetch_max                                       134217728
        dtl_sm_blksz                                                4096
        embedded_slog_min_ms                                          64
        flags                                                          0
        fletcher_4_impl [fastest] scalar superscalar superscalar4 sse2 ssse3
        free_bpobj_enabled                                             1
        free_leak_on_eio                                               0
        free_min_time_ms                                            1000
        history_output_max                                       1048576
        immediate_write_sz                                         32768
        initialize_chunk_size                                    1048576
        initialize_value                            16045690984833335022
        keep_log_spacemaps_at_export                                   0
        l2arc.exclude_special                                          0
        l2arc.feed_again                                               1
        l2arc.feed_min_ms                                            200
        l2arc.feed_secs                                                1
        l2arc.headroom                                                 2
        l2arc.headroom_boost                                         200
        l2arc.meta_percent                                            33
        l2arc.mfuonly                                                  0
        l2arc.noprefetch                                               1
        l2arc.norw                                                     0
        l2arc.rebuild_blocks_min_l2size                       1073741824
        l2arc.rebuild_enabled                                          0
        l2arc.trim_ahead                                               0
        l2arc.write_boost                                        8388608
        l2arc.write_max                                          8388608
        l2arc_feed_again                                               1
        l2arc_feed_min_ms                                            200
        l2arc_feed_secs                                                1
        l2arc_headroom                                                 2
        l2arc_noprefetch                                               1
        l2arc_norw                                                     0
        l2arc_write_boost                                        8388608
        l2arc_write_max                                          8388608
        l2c_only_size                                                  0
        livelist.condense.new_alloc                                    0
        livelist.condense.sync_cancel                                  0
        livelist.condense.sync_pause                                   0
        livelist.condense.zthr_cancel                                  0
        livelist.condense.zthr_pause                                   0
        livelist.max_entries                                      500000
        livelist.min_percent_shared                                   75
        lua.max_instrlimit                                     100000000
        lua.max_memlimit                                       104857600
        max_async_dedup_frees                                     100000
        max_auto_ashift                                               14
        max_dataset_nesting                                           50
        max_log_walking                                                5
        max_logsm_summary_length                                      10
        max_missing_tvds                                               0
        max_missing_tvds_cachefile                                     2
        max_missing_tvds_scan                                          0
        max_nvlist_src_size                                            0
        max_recordsize                                           1048576
        metaslab.aliquot                                         1048576
        metaslab.bias_enabled                                          1
        metaslab.debug_load                                            0
        metaslab.debug_unload                                          0
        metaslab.df_alloc_threshold                               131072
        metaslab.df_free_pct                                           4
        metaslab.df_max_search                                  16777216
        metaslab.df_use_largest_segment                                0
        metaslab.find_max_tries                                      100
        metaslab.force_ganging                                  16777217
        metaslab.fragmentation_factor_enabled                          1
        metaslab.fragmentation_threshold                              70
        metaslab.lba_weighting_enabled                                 1
        metaslab.load_pct                                             50
        metaslab.max_size_cache_sec                                 3600
        metaslab.mem_limit                                            25
        metaslab.preload_enabled                                       1
        metaslab.preload_limit                                        10
        metaslab.segment_weight_enabled                                1
        metaslab.sm_blksz_no_log                                   16384
        metaslab.sm_blksz_with_log                                131072
        metaslab.switch_threshold                                      2
        metaslab.try_hard_before_gang                                  0
        metaslab.unload_delay                                         32
        metaslab.unload_delay_ms                                  600000
        mfu_data_esize                                         105253888
        mfu_ghost_data_esize                                           0
        mfu_ghost_metadata_esize                                       0
        mfu_ghost_size                                                 0
        mfu_metadata_esize                                      18494976
        mfu_size                                               240389632
        mg.fragmentation_threshold                                    95
        mg.noalloc_threshold                                           0
        min_auto_ashift                                                9
        min_metaslabs_to_flush                                         1
        mru_data_esize                                       10455247872
        mru_ghost_data_esize                                           0
        mru_ghost_metadata_esize                                       0
        mru_ghost_size                                                 0
        mru_metadata_esize                                      57638400
        mru_size                                             11079056384
        multihost.fail_intervals                                      10
        multihost.history                                              0
        multihost.import_intervals                                    20
        multihost.interval                                          1000
        multilist_num_sublists                                         0
        no_scrub_io                                                    0
        no_scrub_prefetch                                              0
        nocacheflush                                                   0
        nopwrite_enabled                                               1
        obsolete_min_time_ms                                         500
        pd_bytes_max                                            52428800
        per_txg_dirty_frees_percent                                   30
        prefetch.array_rd_sz                                     1048576
        prefetch.disable                                               0
        prefetch.max_distance                                   67108864
        prefetch.max_idistance                                  67108864
        prefetch.max_sec_reap                                          2
        prefetch.max_streams                                           8
        prefetch.min_distance                                    4194304
        prefetch.min_sec_reap                                          1
        read_history                                                   0
        read_history_hits                                              0
        rebuild_max_segment                                      1048576
        rebuild_scrub_enabled                                          1
        rebuild_vdev_limit                                      67108864
        reconstruct.indirect_combinations_max                       4096
        recover                                                        0
        recv.queue_ff                                                 20
        recv.queue_length                                       16777216
        recv.write_batch_size                                    1048576
        removal_suspend_progress                                       0
        remove_max_segment                                      16777216
        resilver_disable_defer                                         0
        resilver_min_time_ms                                        3000
        scan_blkstats                                                  0
        scan_checkpoint_intval                                      7200
        scan_fill_weight                                               3
        scan_ignore_errors                                             0
        scan_issue_strategy                                            0
        scan_legacy                                                    0
        scan_max_ext_gap                                         2097152
        scan_mem_lim_fact                                             20
        scan_mem_lim_soft_fact                                        20
        scan_report_txgs                                               0
        scan_strict_mem_lim                                            0
        scan_suspend_progress                                          0
        scan_vdev_limit                                         16777216
        scrub_min_time_ms                                           1000
        send.corrupt_data                                              0
        send.no_prefetch_queue_ff                                     20
        send.no_prefetch_queue_length                            1048576
        send.override_estimate_recordsize                              0
        send.queue_ff                                                 20
        send.queue_length                                       16777216
        send.unmodified_spill_blocks                                   1
        send_holes_without_birth_time                                  1
        slow_io_events_per_second                                     20
        spa.asize_inflation                                           24
        spa.discard_memory_limit                                16777216
        spa.load_print_vdev_tree                                       0
        spa.load_verify_data                                           1
        spa.load_verify_metadata                                       1
        spa.load_verify_shift                                          4
        spa.slop_shift                                                 5
        space_map_ibs                                                 14
        special_class_metadata_reserve_pct                            25
        standard_sm_blksz                                         131072
        super_owner                                                    0
        sync_pass_deferred_free                                        2
        sync_pass_dont_compress                                        8
        sync_pass_rewrite                                              2
        sync_taskq_batch_pct                                          75
        top_maxinflight                                             1000
        traverse_indirect_prefetch_limit                              32
        trim.extent_bytes_max                                  134217728
        trim.extent_bytes_min                                      32768
        trim.metaslab_skip                                             0
        trim.queue_limit                                              10
        trim.txg_batch                                                32
        txg.history                                                  100
        txg.timeout                                                    5
        unflushed_log_block_max                                   131072
        unflushed_log_block_min                                     1000
        unflushed_log_block_pct                                      400
        unflushed_log_txg_max                                       1000
        unflushed_max_mem_amt                                 1073741824
        unflushed_max_mem_ppm                                       1000
        user_indirect_is_special                                       1
        validate_skip                                                  0
        vdev.aggregate_trim                                            0
        vdev.aggregation_limit                                   1048576
        vdev.aggregation_limit_non_rotating                       131072
        vdev.async_read_max_active                                     3
        vdev.async_read_min_active                                     1
        vdev.async_write_active_max_dirty_percent                     60
        vdev.async_write_active_min_dirty_percent                     30
        vdev.async_write_max_active                                    5
        vdev.async_write_min_active                                    1
        vdev.bio_delete_disable                                        0
        vdev.bio_flush_disable                                         0
        vdev.cache_bshift                                             16
        vdev.cache_max                                             16384
        vdev.cache_size                                                0
        vdev.def_queue_depth                                          32
        vdev.default_ms_count                                        200
        vdev.default_ms_shift                                         29
        vdev.file.logical_ashift                                       9
        vdev.file.physical_ashift                                      9
        vdev.initializing_max_active                                   1
        vdev.initializing_min_active                                   1
        vdev.max_active                                             1000
        vdev.max_auto_ashift                                          14
        vdev.min_auto_ashift                                           9
        vdev.min_ms_count                                             16
        vdev.mirror.non_rotating_inc                                   0
        vdev.mirror.non_rotating_seek_inc                              1
        vdev.mirror.rotating_inc                                       0
        vdev.mirror.rotating_seek_inc                                  5
        vdev.mirror.rotating_seek_offset                         1048576
        vdev.ms_count_limit                                       131072
        vdev.nia_credit                                                5
        vdev.nia_delay                                                 5
        vdev.queue_depth_pct                                        1000
        vdev.read_gap_limit                                        32768
        vdev.rebuild_max_active                                        3
        vdev.rebuild_min_active                                        1
        vdev.removal_ignore_errors                                     0
        vdev.removal_max_active                                        2
        vdev.removal_max_span                                      32768
        vdev.removal_min_active                                        1
        vdev.removal_suspend_progress                                  0
        vdev.remove_max_segment                                 16777216
        vdev.scrub_max_active                                          3
        vdev.scrub_min_active                                          1
        vdev.sync_read_max_active                                     10
        vdev.sync_read_min_active                                     10
        vdev.sync_write_max_active                                    10
        vdev.sync_write_min_active                                    10
        vdev.trim_max_active                                           2
        vdev.trim_min_active                                           1
        vdev.validate_skip                                             0
        vdev.write_gap_limit                                        4096
        version.acl                                                    1
        version.ioctl                                                 15
        version.module                         v2023100900-zfs_dd2649a68
        version.spa                                                 5000
        version.zpl                                                    5
        vnops.read_chunk_size                                    1048576
        vol.mode                                                       2
        vol.recursive                                                  0
        vol.unmap_enabled                                              1
        wrlog_data_max                                        8589934592
        xattr_compat                                                   1
        zap_iterate_prefetch                                           1
        zevent.len_max                                               512
        zevent.retain_expire_secs                                    900
        zevent.retain_max                                           2000
        zfetch.max_distance                                     67108864
        zfetch.max_idistance                                    67108864
        zil.clean_taskq_maxalloc                                 1048576
        zil.clean_taskq_minalloc                                    1024
        zil.clean_taskq_nthr_pct                                     100
        zil.maxblocksize                                          131072
        zil.min_commit_timeout                                      5000
        zil.nocacheflush                                               0
        zil.replay_disable                                             0
        zil.slog_bulk                                             786432
        zio.deadman_log_all                                            0
        zio.dva_throttle_enabled                                       1
        zio.exclude_metadata                                           0
        zio.requeue_io_start_cut_in_line                               1
        zio.slow_io_ms                                             30000
        zio.taskq_batch_pct                                           80
        zio.taskq_batch_tpq                                            0
        zio.use_uma                                                    1

VDEV cache disabled, skipping section

ZIL committed transac
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
You can always put an RJ45 transceiver in an SFP+ cage, so this reads like you should go for SFP+, be it onboard or as a NIC.
Gotcha

Is this a comment on the X12SDV (current embedded generation, only one generation behind Sapphire Rapids) I linked, or are you confusing it with Broadwell-D X10SDV?
You got me ;)

Hmm, 9400 is maybe a bit TOO new…
Seems to work fine!

So an X11SPM-TPF with onboard SFP+, 12 SATA ports and x16/x16/x8 PCIe slots should be plenty, including future expansion.
Yes, it's not final for now, but hopefully yes. What disappoints me is that there's no option for 2x NVMe slots. I mean, it's a costly board; why not give it 2x NVMe slots so that people can make the boot pool redundant in mission-critical and enterprise environments without investing in another piece of hardware?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Well, now you have to either destroy your pool or add enough drives to match its parity.
L2ARC is cache; a metadata special VDEV is not: you can lose the first, you cannot lose the second.
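To illustrate (pool name is hypothetical):

Code:
# An L2ARC (cache) device can be dropped from any pool at any time
zpool remove tank <cache-device>
# Aimed at the special vdev (e.g. mirror-1 in zpool status), the same
# command is refused on a pool containing raidz vdevs: removal cannot
# evacuate the metadata stored there
zpool remove tank mirror-1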

Yes, it's not final for now, but hopefully yes. What disappoints me is that there's no option for 2x NVMe slots. I mean, it's a costly board; why not give it 2x NVMe slots so that people can make the boot pool redundant in mission-critical and enterprise environments without investing in another piece of hardware?
It's useless unless you go with three drives, and only useful if the system is deployed far from you so it's inconvenient to physically service it.

Usually, a config backup and a new SSD suffice.
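On CORE the supported route is System > General > Save Config (which can also bundle the password secret seed), but the whole configuration is a single SQLite file, so even a plain copy works as a crude fallback; the destination path here is made up:

Code:
# Copy the config database somewhere off the boot device
cp /data/freenas-v1.db /mnt/tank/backups/truenas-config-$(date +%Y%m%d).db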
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yes, it's not final for now, but hopefully yes. What disappoints me is that there's no option for 2x NVMe slots. I mean, it's a costly board; why not give it 2x NVMe slots so that people can make the boot pool redundant in mission-critical and enterprise environments without investing in another piece of hardware?
Well, you can mirror SATADOMs for a boot pool, or host your M.2 on a riser card. But there just isn't any real estate for a second M.2: there's already a Xeon Scalable socket, just enough RAM sockets to use all its channels, just enough PCIe slots to expose all available CPU lanes, SFP+ cages, the C622 chipset, and one M.2 2280, all crammed into micro-ATX.
For home use, there's no point in redundant boot drives.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Well, now you have to either destroy your pool or add enough drives to match its parity.
L2ARC is cache; a metadata special VDEV is not: you can lose the first, you cannot lose the second.
Of course, of course, I understand that. But I'm still experimenting and it's not a production machine yet, so I wanted to test metadata L2ARC.

It's useless unless you go with three drives, and only useful if the system is deployed far from you so it's inconvenient to physically service it.
Cool. Got it!

Usually, a config backup and a new SSD suffice.
Perfect!
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Well, you can mirror SATADOMs for a boot pool, or host your M.2 on a riser card. But there just isn't any real estate for a second M.2: there's already a Xeon Scalable socket, just enough RAM sockets to use all its channels, just enough PCIe slots to expose all available CPU lanes, SFP+ cages, the C622 chipset, and one M.2 2280, all crammed into micro-ATX.
Of course, of course, that works, but it would be really nice to have 2x drives for that purpose.

For home use, there's no point in redundant boot drives.
Well, I get your point. But sometimes home users do have important data, for instance when working from home on development and video editing.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Well, I get your point. But sometimes home users do have important data, for instance when working from home on development and video editing.
As long as you have a backup config, the boot pool is disposable (and even the backup config is more of a convenience thing; data is never at risk even without it).

Regarding your arc_summary output: you should let the cache grow before extrapolating anything from it. 11 GB of ARC is too little considering you have 64 GB of RAM; any insight taken from it now would be misleading.
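Something like this lets you watch it warm up:

Code:
# Current ARC size in bytes, sampled every minute
while true; do sysctl -n kstat.zfs.misc.arcstats.size; sleep 60; done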
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
As long as you have a backup config, the boot pool is disposable (and even the backup config is more of a convenience thing; data is never at risk even without it).
I get that, but my concern is: what if the boot pool fails out of the blue during a write operation? Will it cause any errors on the data pool?

Regarding your arc_summary output: you should let the cache grow before extrapolating anything from it. 11 GB of ARC is too little considering you have 64 GB of RAM; any insight taken from it now would be misleading.
Umm, sorry to say, but I don't understand. Can you please rephrase that for me? I'm not trying to set up a full L2ARC here, just a metadata L2ARC.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Likely exploring the arc_summary data.


There are 3 models for each socket that fit these requirements, respectively:
For the other requirements, please see for yourself. You likely won't find both 10Gbps SFP+ and Base-T on the same board.
SFP+ has the advantage that you can get 10G-T modules for those ports :D

That wasn't really the case until the past couple of years.
 