L2ARC working but not for file share.

James Gardiner

Dabbler
Joined
Jul 14, 2017
Messages
19
Hi,
I would like to ask a question.
I have a large L2ARC (2x 4TB NVMe SSDs) attached to a test 6-drive array of roughly 20TB.
When I run "dd" read tests locally on the FreeNAS box, the L2ARC does appear to be working: once a file from the share is in the L2ARC, I get around 2700 MB/s reading it.

I also have an NFS v4.1 share exported to a server. iperf3 reports 9.4 Gbit/s or thereabouts, which is what I would expect from the 10GbE interfaces between the FreeNAS server and the Debian-based client.

If I read files over NFS 4.1 that fit into the main memory cache, I get roughly the speed I expect. But when I read a larger file, say 50 GB, the client only sees something like the sequential disk speed the 6-drive array is capable of, even though the FreeNAS server itself can read the same file locally at 2700 MB/s.

Is this expected? Is there some limitation when combining L2ARC with an NFS share?

Tests/examples follow.

Code:
-- Server test dd
[root@freenastest /mnt/testPool3/nfs_share/LoveSarah_FTR-1]# dd if=LoveSarah_FTR-1_F_EN-XX_AU_51_2K_RIAL_20200306_SIL_IOP_OV_02.mxf  of=/dev/null bs=4M status=progress
  24335351808 bytes (24 GB, 23 GiB) transferred 9.002s, 2703 MB/s   
6377+1 records in
6377+1 records out
26748543781 bytes transferred in 9.974685 secs (2681643078 bytes/sec)
--- Client
dd if=LoveSarah_FTR-1_F_EN-XX_AU_51_2K_RIAL_20200306_SIL_IOP_OV_02.mxf  of=/dev/null bs=4M status=progress
26315063296 bytes (26 GB, 25 GiB) copied, 91 s, 289 MB/s
6377+1 records in
6377+1 records out
26748543781 bytes (27 GB, 25 GiB) copied, 91.9628 s, 291 MB/s
--- Client test dd of smaller file..
root@proxmox1:/mnt/LoveSarah_FTR-1# dd if=LoveSarah_FTR-1_F_EN-XX_AU_51_2K_RIAL_20200306_SIL_IOP_OV_07_audio.mxf  of=/dev/null bs=4M status=progress
271+1 records in
271+1 records out
1140589994 bytes (1.1 GB, 1.1 GiB) copied, 0.350412 s, 3.3 GB/s

---
root@proxmox1:/mnt/LoveSarah_FTR-1# iperf3 -R -c 10.11.2.222
Connecting to host 10.11.2.222, port 5201
Reverse mode, remote host 10.11.2.222 is sending
[  5] local 10.11.2.211 port 54548 connected to 10.11.2.222 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.04 GBytes  8.89 Gbits/sec                 
[  5]   1.00-2.00   sec  1.09 GBytes  9.35 Gbits/sec                 
[  5]   2.00-3.00   sec  1.09 GBytes  9.32 Gbits/sec                 
[  5]   3.00-4.00   sec  1.09 GBytes  9.36 Gbits/sec                 
[  5]   4.00-5.00   sec  1.09 GBytes  9.34 Gbits/sec                 
[  5]   5.00-6.00   sec  1.09 GBytes  9.34 Gbits/sec                 
[  5]   6.00-7.00   sec  1.09 GBytes  9.33 Gbits/sec                 
[  5]   7.00-8.00   sec  1.09 GBytes  9.34 Gbits/sec                 
[  5]   8.00-9.00   sec  1.02 GBytes  8.76 Gbits/sec                 
[  5]   9.00-10.00  sec  1003 MBytes  8.41 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.7 GBytes  9.15 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  10.6 GBytes  9.14 Gbits/sec                  receiver

iperf Done.
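
For reference, I can also watch the cache devices directly on the FreeNAS box while the client runs its dd over NFS, to see whether the reads actually come off the NVMe L2ARC devices. A rough sketch (pool name testPool3 taken from the paths above; device names will differ):

Code:
# on the FreeNAS server, while the NFS read is running on the client
zpool iostat -v testPool3 1
# the "cache" section at the bottom lists the L2ARC SSDs;
# their read bandwidth shows whether the NFS read is being served from L2ARC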
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What do your ARC stats look like? (arc_summary.py, and arcstat.py 1 20 while running the copy.) And what do the Reporting charts show when running those copies?
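Something along these lines while the copy is running (the log path is just an example; the file name is the one from your earlier dd):

Code:
# terminal 1, on the FreeNAS server: sample the ARC once per second for the duration of the copy
arcstat.py 1 120 | tee /tmp/arcstat_nfs_read.log

# terminal 2, on the client: the same large-file read over NFS
dd if=LoveSarah_FTR-1_F_EN-XX_AU_51_2K_RIAL_20200306_SIL_IOP_OV_02.mxf of=/dev/null bs=4M status=progress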
 

James Gardiner

Dabbler
Joined
Jul 14, 2017
Messages
19
Hi,
As you requested:
arc_summary.py
Code:
[root@freenastest /]# arc_summary.py
System Memory:

    0.33%    156.54    MiB Active,    2.72%    1.27    GiB Inact
    93.09%    43.48    GiB Wired,    0.00%    0    Bytes Cache
    3.75%    1.75    GiB Free,    0.11%    52.09    MiB Gap

    Real Installed:                48.00    GiB
    Real Available:            99.88%    47.94    GiB
    Real Managed:            97.41%    46.70    GiB

    Logical Total:                48.00    GiB
    Logical Used:            93.70%    44.98    GiB
    Logical Free:            6.30%    3.02    GiB

Kernel Memory:                    673.84    MiB
    Data:                93.18%    627.90    MiB
    Text:                6.82%    45.94    MiB

Kernel Memory Map:                46.70    GiB
    Size:                5.31%    2.48    GiB
    Free:                94.69%    44.22    GiB
                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
    Storage pool Version:            5000
    Filesystem Version:            5
    Memory Throttle Count:            0

ARC Misc:
    Deleted:                38.13k
    Mutex Misses:                112
    Evict Skips:                112

ARC Size:                88.91%    40.63    GiB
    Target Size: (Adaptive)        88.86%    40.61    GiB
    Min Size (Hard Limit):        12.50%    5.71    GiB
    Max Size (High Water):        8:1    45.70    GiB

ARC Size Breakdown:
    Recently Used Cache Size:    85.16%    34.60    GiB
    Frequently Used Cache Size:    14.84%    6.03    GiB

ARC Hash Breakdown:
    Elements Max:                168.34k
    Elements Current:        99.98%    168.31k
    Collisions:                28.19k
    Chain Max:                2
    Chains:                    1.69k
                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                    68.11m
    Cache Hit Ratio:        96.52%    65.74m
    Cache Miss Ratio:        3.48%    2.37m
    Actual Hit Ratio:        95.34%    64.94m

    Data Demand Efficiency:        98.32%    5.50m
    Data Prefetch Efficiency:    87.34%    1.27m

    CACHE HITS BY CACHE LIST:
      Anonymously Used:        1.22%    805.05k
      Most Recently Used:        8.50%    5.59m
      Most Frequently Used:        90.28%    59.35m
      Most Recently Used Ghost:    0.00%    1
      Most Frequently Used Ghost:    0.00%    993

    CACHE HITS BY DATA TYPE:
      Demand Data:            8.23%    5.41m
      Prefetch Data:        1.69%    1.11m
      Demand Metadata:        90.07%    59.21m
      Prefetch Metadata:        0.02%    10.85k

    CACHE MISSES BY DATA TYPE:
      Demand Data:            3.91%    92.63k
      Prefetch Data:        6.80%    160.92k
      Demand Metadata:        89.20%    2.11m
      Prefetch Metadata:        0.09%    2.13k
                                Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
    Passed Headroom:            70.07k
    Tried Lock Failures:            15.02k
    IO In Progress:                0
    Low Memory Aborts:            0
    Free on Write:                120
    Writes While Full:            18
    R/W Clashes:                0
    Bad Checksums:                0
    IO Errors:                0
    SPA Mismatch:                39.36m

L2 ARC Size: (Adaptive)                74.34    GiB
    Compressed:            99.97%    74.32    GiB
    Header Size:            0.00%    2.98    MiB

L2 ARC Breakdown:                2.36m
    Hit Ratio:            0.00%    9
    Miss Ratio:            100.00%    2.36m
    Feeds:                    153.53k

L2 ARC Buffer:
    Bytes Scanned:                285.97    TiB
    Buffer Iterations:            153.53k
    List Iterations:            614.14k
    NULL List Iterations:            611

L2 ARC Writes:
    Writes Sent:            100.00%    17.08k
                                Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:            17.67m
    Hit Ratio:            6.59%    1.16m
    Miss Ratio:            93.41%    16.50m

                                Page:  5
------------------------------------------------------------------------

                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
    kern.maxusers                           3404
    vm.kmem_size                            50144825344
    vm.kmem_size_scale                      1
    vm.kmem_size_min                        0
    vm.kmem_size_max                        1319413950874
    vfs.zfs.vol.immediate_write_sz          32768
    vfs.zfs.vol.unmap_sync_enabled          0
    vfs.zfs.vol.unmap_enabled               1
    vfs.zfs.vol.recursive                   0
    vfs.zfs.vol.mode                        2
    vfs.zfs.sync_pass_rewrite               2
    vfs.zfs.sync_pass_dont_compress         5
    vfs.zfs.sync_pass_deferred_free         2
    vfs.zfs.zio.dva_throttle_enabled        1
    vfs.zfs.zio.exclude_metadata            0
    vfs.zfs.zio.use_uma                     1
    vfs.zfs.zio.taskq_batch_pct             75
    vfs.zfs.zil_maxblocksize                131072
    vfs.zfs.zil_slog_bulk                   786432
    vfs.zfs.zil_nocacheflush                0
    vfs.zfs.zil_replay_disable              0
    vfs.zfs.version.zpl                     5
    vfs.zfs.version.spa                     5000
    vfs.zfs.version.acl                     1
    vfs.zfs.version.ioctl                   7
    vfs.zfs.debug                           0
    vfs.zfs.super_owner                     0
    vfs.zfs.immediate_write_sz              32768
    vfs.zfs.cache_flush_disable             0
    vfs.zfs.standard_sm_blksz               131072
    vfs.zfs.dtl_sm_blksz                    4096
    vfs.zfs.min_auto_ashift                 12
    vfs.zfs.max_auto_ashift                 13
    vfs.zfs.vdev.def_queue_depth            32
    vfs.zfs.vdev.queue_depth_pct            1000
    vfs.zfs.vdev.write_gap_limit            4096
    vfs.zfs.vdev.read_gap_limit             32768
    vfs.zfs.vdev.aggregation_limit_non_rotating 131072
    vfs.zfs.vdev.aggregation_limit          1048576
    vfs.zfs.vdev.initializing_max_active    1
    vfs.zfs.vdev.initializing_min_active    1
    vfs.zfs.vdev.removal_max_active         2
    vfs.zfs.vdev.removal_min_active         1
    vfs.zfs.vdev.trim_max_active            64
    vfs.zfs.vdev.trim_min_active            1
    vfs.zfs.vdev.scrub_max_active           2
    vfs.zfs.vdev.scrub_min_active           1
    vfs.zfs.vdev.async_write_max_active     10
    vfs.zfs.vdev.async_write_min_active     1
    vfs.zfs.vdev.async_read_max_active      3
    vfs.zfs.vdev.async_read_min_active      1
    vfs.zfs.vdev.sync_write_max_active      10
    vfs.zfs.vdev.sync_write_min_active      10
    vfs.zfs.vdev.sync_read_max_active       10
    vfs.zfs.vdev.sync_read_min_active       10
    vfs.zfs.vdev.max_active                 1000
    vfs.zfs.vdev.async_write_active_max_dirty_percent 60
    vfs.zfs.vdev.async_write_active_min_dirty_percent 30
    vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
    vfs.zfs.vdev.mirror.non_rotating_inc    0
    vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
    vfs.zfs.vdev.mirror.rotating_seek_inc   5
    vfs.zfs.vdev.mirror.rotating_inc        0
    vfs.zfs.vdev.trim_on_init               1
    vfs.zfs.vdev.bio_delete_disable         0
    vfs.zfs.vdev.bio_flush_disable          0
    vfs.zfs.vdev.cache.bshift               16
    vfs.zfs.vdev.cache.size                 0
    vfs.zfs.vdev.cache.max                  16384
    vfs.zfs.vdev.validate_skip              0
    vfs.zfs.vdev.max_ms_shift               38
    vfs.zfs.vdev.default_ms_shift           29
    vfs.zfs.vdev.max_ms_count_limit         131072
    vfs.zfs.vdev.min_ms_count               16
    vfs.zfs.vdev.max_ms_count               200
    vfs.zfs.vdev.trim_max_pending           10000
    vfs.zfs.txg.timeout                     5
    vfs.zfs.trim.enabled                    1
    vfs.zfs.trim.max_interval               1
    vfs.zfs.trim.timeout                    30
    vfs.zfs.trim.txg_delay                  32
    vfs.zfs.space_map_ibs                   14
    vfs.zfs.spa_allocators                  4
    vfs.zfs.spa_min_slop                    134217728
    vfs.zfs.spa_slop_shift                  5
    vfs.zfs.spa_asize_inflation             24
    vfs.zfs.deadman_enabled                 1
    vfs.zfs.deadman_checktime_ms            60000
    vfs.zfs.deadman_synctime_ms             600000
    vfs.zfs.debug_flags                     0
    vfs.zfs.debugflags                      0
    vfs.zfs.recover                         0
    vfs.zfs.spa_load_verify_data            1
    vfs.zfs.spa_load_verify_metadata        1
    vfs.zfs.spa_load_verify_maxinflight     10000
    vfs.zfs.max_missing_tvds_scan           0
    vfs.zfs.max_missing_tvds_cachefile      2
    vfs.zfs.max_missing_tvds                0
    vfs.zfs.spa_load_print_vdev_tree        0
    vfs.zfs.ccw_retry_interval              300
    vfs.zfs.check_hostid                    1
    vfs.zfs.mg_fragmentation_threshold      85
    vfs.zfs.mg_noalloc_threshold            0
    vfs.zfs.condense_pct                    200
    vfs.zfs.metaslab_sm_blksz               4096
    vfs.zfs.metaslab.bias_enabled           1
    vfs.zfs.metaslab.lba_weighting_enabled  1
    vfs.zfs.metaslab.fragmentation_factor_enabled 1
    vfs.zfs.metaslab.preload_enabled        1
    vfs.zfs.metaslab.preload_limit          3
    vfs.zfs.metaslab.unload_delay           8
    vfs.zfs.metaslab.load_pct               50
    vfs.zfs.metaslab.min_alloc_size         33554432
    vfs.zfs.metaslab.df_free_pct            4
    vfs.zfs.metaslab.df_alloc_threshold     131072
    vfs.zfs.metaslab.debug_unload           0
    vfs.zfs.metaslab.debug_load             0
    vfs.zfs.metaslab.fragmentation_threshold 70
    vfs.zfs.metaslab.force_ganging          16777217
    vfs.zfs.free_bpobj_enabled              1
    vfs.zfs.free_max_blocks                 18446744073709551615
    vfs.zfs.zfs_scan_checkpoint_interval    7200
    vfs.zfs.zfs_scan_legacy                 0
    vfs.zfs.no_scrub_prefetch               0
    vfs.zfs.no_scrub_io                     0
    vfs.zfs.resilver_min_time_ms            3000
    vfs.zfs.free_min_time_ms                1000
    vfs.zfs.scan_min_time_ms                1000
    vfs.zfs.scan_idle                       50
    vfs.zfs.scrub_delay                     4
    vfs.zfs.resilver_delay                  2
    vfs.zfs.top_maxinflight                 32
    vfs.zfs.delay_scale                     500000
    vfs.zfs.delay_min_dirty_percent         60
    vfs.zfs.dirty_data_sync_pct             20
    vfs.zfs.dirty_data_max_percent          10
    vfs.zfs.dirty_data_max_max              4294967296
    vfs.zfs.dirty_data_max                  4294967296
    vfs.zfs.max_recordsize                  1048576
    vfs.zfs.default_ibs                     15
    vfs.zfs.default_bs                      9
    vfs.zfs.zfetch.array_rd_sz              1048576
    vfs.zfs.zfetch.max_idistance            67108864
    vfs.zfs.zfetch.max_distance             8388608
    vfs.zfs.zfetch.min_sec_reap             2
    vfs.zfs.zfetch.max_streams              8
    vfs.zfs.prefetch_disable                0
    vfs.zfs.send_holes_without_birth_time   1
    vfs.zfs.mdcomp_disable                  0
    vfs.zfs.per_txg_dirty_frees_percent     30
    vfs.zfs.nopwrite_enabled                1
    vfs.zfs.dedup.prefetch                  1
    vfs.zfs.dbuf_cache_lowater_pct          10
    vfs.zfs.dbuf_cache_hiwater_pct          10
    vfs.zfs.dbuf_metadata_cache_overflow    0
    vfs.zfs.dbuf_metadata_cache_shift       6
    vfs.zfs.dbuf_cache_shift                5
    vfs.zfs.dbuf_metadata_cache_max_bytes   766735680
    vfs.zfs.dbuf_cache_max_bytes            1533471360
    vfs.zfs.arc_min_prescient_prefetch_ms   6
    vfs.zfs.arc_min_prefetch_ms             1
    vfs.zfs.l2c_only_size                   0
    vfs.zfs.mfu_ghost_data_esize            13093484032
    vfs.zfs.mfu_ghost_metadata_esize        0
    vfs.zfs.mfu_ghost_size                  13093484032
    vfs.zfs.mfu_data_esize                  29637709824
    vfs.zfs.mfu_metadata_esize              6330880
    vfs.zfs.mfu_size                        31624126976
    vfs.zfs.mru_ghost_data_esize            6449266688
    vfs.zfs.mru_ghost_metadata_esize        0
    vfs.zfs.mru_ghost_size                  6449266688
    vfs.zfs.mru_data_esize                  11138598400
    vfs.zfs.mru_metadata_esize              1773568
    vfs.zfs.mru_size                        11956656640
    vfs.zfs.anon_data_esize                 0
    vfs.zfs.anon_metadata_esize             0
    vfs.zfs.anon_size                       290816
    vfs.zfs.l2arc_norw                      1
    vfs.zfs.l2arc_feed_again                1
    vfs.zfs.l2arc_noprefetch                0
    vfs.zfs.l2arc_feed_min_ms               200
    vfs.zfs.l2arc_feed_secs                 1
    vfs.zfs.l2arc_headroom                  2
    vfs.zfs.l2arc_write_boost               400000000
    vfs.zfs.l2arc_write_max                 400000000
    vfs.zfs.arc_meta_limit                  12267770880
    vfs.zfs.arc_free_target                 260846
    vfs.zfs.arc_kmem_cache_reap_retry_ms    1000
    vfs.zfs.compressed_arc_enabled          1
    vfs.zfs.arc_grow_retry                  60
    vfs.zfs.arc_shrink_shift                7
    vfs.zfs.arc_average_blocksize           8192
    vfs.zfs.arc_no_grow_shift               5
    vfs.zfs.arc_min                         6133885440
    vfs.zfs.arc_max                         49071083520
    vfs.zfs.abd_chunk_size                  4096
    vfs.zfs.abd_scatter_enabled             1
                                Page:  7
------------------------------------------------------------------------

arcstat.py 1 20
results...
Code:
3 runs:
1 - idle
2 - NFS dd read of the file: 277 MB/s result
3 - local dd read of the file: 2816 MB/s result (note: the dd finished before the arcstat run completed.)

[root@freenastest /]# arcstat.py 1 20
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
14:42:07   69M  2.5M      3  2.2M    3  219K   15  2.1M    3    40G   40G
14:42:08     5     4     80     4   80     0    0     0    0    40G   40G
14:42:09  1.6K     4      0     4    0     0    0     0    0    40G   40G
14:42:10   251     4      1     4    1     0    0     0    0    40G   40G
14:42:11   125     7      5     7    5     0    0     3    2    40G   40G
14:42:12     5     4     80     4   80     0    0     0    0    40G   40G
14:42:13     5     4     80     4   80     0    0     0    0    40G   40G
14:42:14     5     4     80     4   80     0    0     0    0    40G   40G
14:42:16     5     4     80     4   80     0    0     0    0    40G   40G
14:42:17    15     6     40     6   40     0    0     2   20    40G   40G
14:42:18     5     4     80     4   80     0    0     0    0    40G   40G
14:42:19  1.3K     4      0     4    0     0    0     0    0    40G   40G
14:42:20    23     4     17     4   17     0    0     0    0    40G   40G
14:42:21   19K  1.8K      9  1.8K    9     0    0  1.8K    8    40G   40G
14:42:22     5     4     80     4   80     0    0     0    0    40G   40G
14:42:23     5     4     80     4   80     0    0     0    0    40G   40G
14:42:24     5     4     80     4   80     0    0     0    0    40G   40G
14:42:25     5     4     80     4   80     0    0     0    0    40G   40G
14:42:26    15     6     40     6   40     0    0     2   20    40G   40G
14:42:27     5     4     80     4   80     0    0     0    0    40G   40G
[root@freenastest /]# arcstat.py 1 20
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
14:43:14   69M  2.5M      3  2.2M    3  219K   15  2.1M    3    40G   40G
14:43:15  7.8K     4      0     4    0     0    0     0    0    40G   40G
14:43:16  7.6K     5      0     5    0     0    0     1    0    40G   40G
14:43:17  8.0K     5      0     5    0     0    0     1    0    40G   40G
14:43:18  7.9K     4      0     4    0     0    0     0    0    40G   40G
14:43:19  8.8K     4      0     4    0     0    0     0    0    40G   40G
14:43:20  8.5K    14      0    14    0     0    0     5    0    40G   40G
14:43:21  7.3K    22      0    22    0     0    0     3    0    40G   40G
14:43:22  7.4K     5      0     5    0     0    0     1    0    40G   40G
14:43:23  7.6K     4      0     4    0     0    0     0    0    40G   40G
14:43:24  7.9K     4      0     4    0     0    0     0    0    40G   40G
14:43:26  7.8K     4      0     4    0     0    0     0    0    40G   40G
14:43:27  7.8K     5      0     5    0     0    0     1    0    40G   40G
14:43:28  7.7K     5      0     5    0     0    0     1    0    40G   40G
14:43:29  9.6K     4      0     4    0     0    0     0    0    40G   40G
14:43:30  8.0K     4      0     4    0     0    0     0    0    40G   40G
14:43:31  7.8K     4      0     4    0     0    0     0    0    40G   40G
14:43:32  7.9K     5      0     5    0     0    0     1    0    40G   40G
14:43:33  8.1K     5      0     5    0     0    0     1    0    40G   40G
14:43:34  7.9K     4      0     4    0     0    0     0    0    40G   40G
[root@freenastest /]# arcstat.py 1 20
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
14:46:10   70M  2.5M      3  2.2M    3  219K   15  2.1M    3    40G   40G
14:46:11   20K     6      0     6    0     0    0     2    0    40G   40G
14:46:12   21K     4      0     4    0     0    0     0    0    40G   40G
14:46:13   22K     4      0     4    0     0    0     0    0    40G   40G
14:46:14   21K     4      0     4    0     0    0     0    0    40G   40G
14:46:15   22K     5      0     5    0     0    0     1    0    40G   40G
14:46:16   22K     5      0     5    0     0    0     1    0    40G   40G
14:46:17   23K     4      0     4    0     0    0     0    0    40G   40G
14:46:18   23K     4      0     4    0     0    0     0    0    40G   40G
14:46:19   23K     4      0     4    0     0    0     0    0    40G   40G
14:46:20  7.4K     5      0     5    0     0    0     1    0    40G   40G
14:46:21   26K  2.0K      7  2.0K    7     0    0  2.0K    7    40G   40G
14:46:22     5     4     80     4   80     0    0     0    0    40G   40G
14:46:23     5     4     80     4   80     0    0     0    0    40G   40G
14:46:24     5     4     80     4   80     0    0     0    0    40G   40G
14:46:25    10     5     50     5   50     0    0     1   20    40G   40G
14:46:26    10     5     50     5   50     0    0     1   20    40G   40G
14:46:27     5     4     80     4   80     0    0     0    0    40G   40G
14:46:28     5     4     80     4   80     0    0     0    0    40G   40G
14:46:29  1.1K     5      0     5    0     0    0     1    0    40G   40G
[root@freenastest /]# 

If you need any other detail, just get back to me.
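
If it helps, I can also snapshot the raw L2ARC counters with sysctl immediately before and after the NFS read (counter names as they appear on this FreeBSD-based build; they may differ on other platforms):

Code:
# on the FreeNAS server, run once before and once after the client's dd over NFS
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses kstat.zfs.misc.arcstats.l2_read_bytes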
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
L2 ARC Breakdown:                2.36m
    Hit Ratio:            0.00%    9
    Miss Ratio:            100.00%    2.36m
    Feeds:                    153.53k
This bit tells some kind of story... your L2ARC is never hit.

You could change the 20 to a larger integer (the 1 is the sample interval in seconds and the 20 is how many samples to report, so if your dd runs for longer than 20 seconds you won't capture the whole copy).
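For example, a longer run that also prints the L2ARC columns directly, assuming the arcstat.py shipped with your build supports field selection with -f (field names can vary between versions):

Code:
arcstat.py -f time,read,hit%,miss%,l2read,l2hit%,l2miss%,l2size 1 120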
 

James Gardiner

Dabbler
Joined
Jul 14, 2017
Messages
19
I am new to this, but looking at the stats myself, the L2ARC does NOT appear to be the problem.
When reading over NFS, I am still getting hits to the L2ARC; it's just not serving the data very quickly.
So my guess is that this is more of an NFS share configuration issue. To put it to the sages on this site: are there any configuration options that specifically affect serving large files?
Since iperf3 runs at full tilt, could it be switch related? I am using a MikroTik 16-port switch on factory defaults to connect the FreeNAS box to the client in this case. Could a difference in MTU or packet sizes between the iperf3 test and the NFS traffic be a cause?
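
A couple of checks I could run to rule those in or out (a rough sketch; the interface names ix0 and eno1 are just placeholders for my 10GbE NICs):

Code:
-- MTU on both ends, and a non-fragmenting jumbo ping across the switch
ifconfig ix0 | grep mtu                 # FreeNAS server (placeholder NIC name)
ip link show eno1                       # Debian/Proxmox client (placeholder NIC name)
ping -M do -s 8972 -c 4 10.11.2.222     # 8972 = 9000 - 20 IP - 8 ICMP; only passes if jumbo frames work end to end

-- NFS mount options the client actually negotiated (rsize/wsize, vers, proto)
nfsstat -m
grep nfs /proc/mounts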

I have been unable to find a similar problem via Google, and so no likely cause/solution so far...
 