ARC Seems small?

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hey all,

So in my reading I have come across many references to ARC being ~7/8 of total system RAM.

In my system (disregard sig below, system is in flux) I currently have one pool with two 6 disk RAIDz2 vdevs and a mirrored SLOG. See below:

Code:
  pool: zfshome
 state: ONLINE
  scan: resilvered 1.28T in 10h9m with 0 errors on Mon Aug 25 05:36:39 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfshome                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/85faf71f-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/86d3925a-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/87a4d43b-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/887d5e7f-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/89409ac9-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/3db34343-2bff-11e4-b231-001517168acc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/56fb015b-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/576cde68-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/57dbbac1-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/584a4dcc-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/58f4ec2f-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/5a0a813f-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
        logs
          mirror-2                                      ONLINE       0     0     0
            gptid/0053fa01-2bfd-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/007bf444-2bfd-11e4-be49-001517168acc  ONLINE       0     0     0

errors: No known data errors


Total system RAM is 72GB, but in both top and the arc_summary.py output, my ARC is only 51GB. If the 7/8 rule held, I should be at about 63GB.

Any idea why this is? See below for details. (No idea why it is reporting 80GB of total RAM, either; there is definitely only 72GB installed, verified by a physical count, in the BIOS, and by memtest86+.)
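
For what it's worth, here is roughly how I have been comparing the numbers (just a sketch; I'm assuming the usual FreeBSD sysctl names here):

Code:
# Configured ARC ceiling vs. the live ARC size, both in bytes
sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size
# 7/8 of the 72GB that is actually installed, for comparison (~63GB)
echo $((72 * 1024 * 1024 * 1024 * 7 / 8))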

Thanks,
Matt

Code:
System Memory:

        0.24%   171.39  MiB Active,     0.13%   89.70   MiB Inact
        76.50%  53.42   GiB Wired,      0.00%   1.12    MiB Cache
        23.13%  16.16   GiB Free,       0.00%   752.00  KiB Gap

        Real Installed:                         80.00   GiB
        Real Available:                 89.97%  71.98   GiB
        Real Managed:                   97.02%  69.84   GiB

        Logical Total:                          80.00   GiB
        Logical Used:                   79.69%  63.76   GiB
        Logical Free:                   20.31%  16.24   GiB

Kernel Memory:                                  593.01  MiB
        Data:                           96.11%  569.96  MiB
        Text:                           3.89%   23.05   MiB

Kernel Memory Map:                              67.61   GiB
        Size:                           74.87%  50.61   GiB
        Free:                           25.13%  16.99   GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                165.57m
        Recycle Misses:                         2.37m
        Mutex Misses:                           9.26k
        Evict Skips:                            9.26k

ARC Size:                               74.10%  51.01   GiB
        Target Size: (Adaptive)         74.10%  51.00   GiB
        Min Size (Hard Limit):          12.50%  8.60    GiB
        Max Size (High Water):          8:1     68.84   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       93.75%  47.82   GiB
        Frequently Used Cache Size:     6.25%   3.19    GiB

ARC Hash Breakdown:
        Elements Max:                           1.56m
        Elements Current:               95.55%  1.49m
        Collisions:                             64.60m
        Chain Max:                              18
        Chains:                                 310.34k
                                                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                                     283.21m
        Cache Hit Ratio:                69.78%  197.62m
        Cache Miss Ratio:               30.22%  85.59m
        Actual Hit Ratio:               68.76%  194.74m

        Data Demand Efficiency:         99.52%  105.98m
        Data Prefetch Efficiency:       3.08%   85.89m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.85%   1.68m
          Most Recently Used:           69.61%  137.57m
          Most Frequently Used:         28.93%  57.17m
          Most Recently Used Ghost:     0.26%   521.38k
          Most Frequently Used Ghost:   0.34%   679.21k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  53.37%  105.48m
          Prefetch Data:                1.34%   2.65m
          Demand Metadata:              45.17%  89.26m
          Prefetch Metadata:            0.12%   234.50k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  0.59%   506.09k
          Prefetch Data:                97.26%  83.25m
          Demand Metadata:              2.07%   1.77m
          Prefetch Metadata:            0.08%   66.62k
                                                                Page:  3
------------------------------------------------------------------------

                                                                Page:  4
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)
DMU Efficiency:                                 499.63m
        Hit Ratio:                      98.21%  490.67m
        Miss Ratio:                     1.79%   8.96m

        Colinear:                               8.96m
          Hit Ratio:                    0.02%   2.24k
          Miss Ratio:                   99.98%  8.96m

        Stride:                                 409.58m
          Hit Ratio:                    100.00% 409.58m
          Miss Ratio:                   0.00%   2.06k

DMU Misc:
        Reclaim:                                8.96m
          Successes:                    0.77%   68.63k
          Failures:                     99.23%  8.89m

        Streams:                                81.09m
          +Resets:                      0.00%   1.95k
          -Resets:                      100.00% 81.09m
          Bogus:                                0
                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            74985517056
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            46490845184
        vfs.zfs.mfu_ghost_metadata_lsize        4209311232
        vfs.zfs.mfu_ghost_size                  50700287488
        vfs.zfs.mfu_data_lsize                  3118072320
        vfs.zfs.mfu_metadata_lsize              163840
        vfs.zfs.mfu_size                        3131036160
        vfs.zfs.mru_ghost_data_lsize            977477632
        vfs.zfs.mru_ghost_metadata_lsize        3091430912
        vfs.zfs.mru_ghost_size                  4068908544
        vfs.zfs.mru_data_lsize                  50370951680
        vfs.zfs.mru_metadata_lsize              476672
        vfs.zfs.mru_size                        50690304000
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       36716544
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  18477943808
        vfs.zfs.arc_meta_used                   1240025272
        vfs.zfs.arc_min                         9238971904
        vfs.zfs.arc_max                         73911775232
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.write_to_degraded               0
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.mg_alloc_failures               9
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.weight_factor_enable   0
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         10485760
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.gang_bang              131073
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.deadman_enabled                 0
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.recover                         0
        vfs.zfs.txg.timeout                     5
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.larger_ashift_minimal      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.trim_max_pending           64
        vfs.zfs.vdev.trim_max_bytes             2147483648
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.version.ioctl                   3
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.vol.mode                        2
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
                                                                Page:  7
------------------------------------------------------------------------
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ZFS will only use:

1. What it can actually allocate. If you are doing FTP, NFS, and all sorts of other stuff, you obviously can't use that RAM for ZFS.
2. What it thinks it needs.


It may be that you have only read 51GB of data since mounting the pool, too. If you do dd tests, it will likely grow. If you then delete the dd file you created, it'll immediately shrink because those blocks are freed.
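
Something along these lines will warm it up if you want to see it grow (just a sketch; the dataset path and size are examples, so adjust for your pool):

Code:
# Write a big test file on the pool, then read it back and watch the ARC grow.
# If compression is enabled on the dataset, read from /dev/random instead of
# /dev/zero so the blocks aren't compressed away to nothing.
dd if=/dev/zero of=/mnt/zfshome/ddtest bs=1m count=20480   # ~20GB test file
dd if=/mnt/zfshome/ddtest of=/dev/null bs=1m               # read it back into ARC
sysctl kstat.zfs.misc.arcstats.size                        # current ARC size in bytes
rm /mnt/zfshome/ddtest                                     # ARC shrinks once the blocks are freed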

But your arc_max is set to 73911775232, so ZFS will definitely use up to that many bytes. So I'd say "nothing to look at here" unless you are about to admit to tweaking ZFS or something... ;)
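
For reference, that works out to about 68.8 GiB, which matches the "Max Size (High Water)" line in your arc_summary output above:

Code:
# arc_max in GiB: 73911775232 bytes / 1024^3
echo "scale=2; 73911775232 / 1024^3" | bc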
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
ZFS will only use:

1. What it can actually allocate. If you are doing FTP, NFS, and all sorts of other stuff, you obviously can't use that RAM for ZFS.
2. What it thinks it needs.


It may be that you have only read 51GB of data since mounting the pool, too. If you do dd tests, it will likely grow. If you then delete the dd file you created, it'll immediately shrink because those blocks are freed.

Ahh, thanks for that input. Just trying to make sure everything is working correctly!

But your arc_max is set to 73911775232, so ZFS will definitely use up to that many bytes. So I'd say "nothing to look at here" unless you are about to admit to tweaking ZFS or something... ;)

Nope, no tweaking here...

...well, except for one tunable.

zil_slog_limit needed to go up. The stock 1MB seemed kind of silly, and from monitoring zpool iostat, the system is making good use of the extra space too!
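
(I've just been keeping an eye on it with something like this; the 5-second interval is arbitrary:)

Code:
# Per-vdev activity every 5 seconds; the mirror-2 log devices show whether
# sync writes are actually landing on the SLOG.
zpool iostat -v zfshome 5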
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
zil_slog_limit?

[root@mini] ~# sysctl -a | grep zil
vfs.zfs.zil_replay_disable: 0

Doesn't appear to exist to me.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
zil_slog_limit?

[root@mini] ~# sysctl -a | grep zil
vfs.zfs.zil_replay_disable: 0

Doesn't appear to exist to me.
Interesting.

Didn't know you could look them up like that.

There are many blog and forum posts that discuss zil_slog_limit, so I just added it manually...
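
(Something along these lines, via a loader.conf tunable; the value here is just illustrative:)

Code:
# /boot/loader.conf -- tunable added by hand, per those blog posts
# (example value; the stock default is reportedly 1MB)
vfs.zfs.zil_slog_limit="16777216"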

Apparently it behaves like this:
zil_slog_limit turns off use of the SLOG for large ZIL commits or large total ZIL log sizes. If the current ZIL commit is larger than zil_slog_limit, or the current total ZIL log size is more than twice zil_slog_limit, the commit is not written to the SLOG device but instead goes into the main pool.

By default it is supposed to be set to only 1 MB, which may have made sense when flash was very expensive, but today it seems silly.
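
As a sanity check of my own understanding, the decision seems to boil down to something like this (pure illustration, not actual ZFS code; the sizes are made up):

Code:
# Illustration of the zil_slog_limit rule described above (not real ZFS code).
zil_slog_limit=1048576        # the reported 1MB default, in bytes
commit_size=4194304           # example: a 4MB ZIL commit
total_zil_size=1048576        # example: current total ZIL log size
if [ "$commit_size" -gt "$zil_slog_limit" ] || \
   [ "$total_zil_size" -gt $((2 * zil_slog_limit)) ]; then
        echo "commit bypasses the SLOG and goes to the main pool"
else
        echo "commit goes to the SLOG"
fi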

Maybe it is deprecated or not implemented in the BSD version of ZFS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm guessing it's not implemented in FreeBSD. There may be some alternative though. I can't say for sure. I just know you mentioned that and I had this blind stare like "what is that? I've never seen that one before!?" and then I had to run and check.

What I do know is that whatever you were setting was actually doing nothing at all. This is one of those examples of why I tell people you can't trust everything you read, because much of it isn't applicable to FreeBSD for one reason or another. Unless you plan to spend a few years reading material that you know is for FreeBSD, and actually digging into the ZFS code, you are pretty unprepared to even consider doing any ZFS tuning. You've just seen a first-hand example of what happens. ;)

I won't lie, I try to avoid tuning ZFS if possible. It's not easy, it can take weeks to months of work to get right, and there's no guarantee you'll ever get the result you want. For 99% of the users in the forum, if the server is too slow it's far, far, far easier to slap in more RAM, a SLOG, or an L2ARC than to try to tweak ZFS to fit whatever limited hardware you have. Spend $1000 on hardware, or spend weeks and weeks jerking around with your server while you crash it left and right trying to learn (and it almost always has to be "in production" to test your tweaks).

We've had people spend 3-4 solid months tweaking ZFS, and I eventually have to ask them, "At what point would it have been cheaper for your business to just buy the additional hardware instead of paying your salary for 4 months to still not solve the problem?" I bet for most companies that break-even threshold was less than 2 weeks. So why were they so stupid that they couldn't buy 2 more sticks of RAM, and instead paid someone to spend 4 months of their life trying to get ZFS working, only to eventually throw in the towel anyway?
 