ARC stats questions/problems thread


Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's ok ;)
 

Jacopx

Patron
Joined
Feb 19, 2016
Messages
367
I can't run the scripts and I don't know why...
The output:
Code:
[root@FreeNAS /mnt/WDVolume_A/Data/scripts]# bash arc_stats.sh
arc_stats.sh: line 2: $'\r': command not found
arc_stats.sh: line 5: $'\r': command not found

  • Data, Video, Photos, T.M. Backups
  • 3:01PM up 1 day, 5:10, 1 user, load averages: 0.23, 0.23, 0.19
arc_stats.sh: line 10: $'\r': command not found
arc_stats.sh: line 12: syntax error near unexpected token `$'do\r''
arc_stats.sh: line 12: `do'
[root@FreeNAS /mnt/WDVolume_A/Data/scripts]#
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Guessing you pasted on a Windows system? There are extraneous \r characters (carriage returns) mixed in with your \n's (newlines)...

Try:

# tr -d '\015' < arc_stats.sh > arc_stats_fixed.sh
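If you want to confirm the stray carriage returns first, a quick check (a sketch; od is in the FreeBSD base system):

Code:
# CRLF line endings show up as \r \n pairs in the dump
od -c arc_stats.sh | head -n 5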
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
And you shouldn't use bash arc_stats.sh, just ./arc_stats.sh ;)
 

Jacopx

Patron
Joined
Feb 19, 2016
Messages
367
And you shouldn't use bash arc_stats.sh, just ./arc_stats.sh ;)

I have already tried it but:
Code:
[root@FreeNAS /mnt/WDVolume_A/Data/scripts]# ./arc_stats.sh                    
bash: ./arc_stats.sh: Permission denied 


The owner of the dataset is root and I'm trying to execute it as the root user... :/
 

Jacopx

Patron
Joined
Feb 19, 2016
Messages
367
Guessing you pasted on a Windows system? There are extraneous \r characters (carriage returns) mixed in with your \n's (newlines)...

Try:

# tr -d '\015' < arc_stats.sh > arc_stats_fixed.sh

Now it's working! I'm coming from an OS X system! Can you explain what this command changed?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Now it's working! I'm coming from an OS X system! Can you explain what this command changed?

"tr" - UNIX translate characters
"-d" - don't translate, rather, delete
"'\015'" - octal code for carriage return
"<" - input redirect
">" - output redirect
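" - output redirect">
For what it's worth, tr also accepts the '\r' escape, so this equivalent sketch does the same thing:

Code:
# '\r' and '\015' both name the carriage return character in tr
tr -d '\r' < arc_stats.sh > arc_stats_fixed.sh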
 

Jacopx

Patron
Joined
Feb 19, 2016
Messages
367
"tr" - UNIX translate characters
"-d" - don't translate, rather, delete
"'\015'" - octal code for carriage return
"<" - input redirect
">" - output redirect

Really good job! Thanks :D
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
The owner of the dataset is root and I'm trying to execute it as the root user... :/

That's probably because the file isn't executable; try chmod +x arc_stats.sh ;)
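Something like this (a sketch; the exact ls output will differ on your box):

Code:
ls -l arc_stats.sh     # no 'x' in the mode bits -> "Permission denied"
chmod +x arc_stats.sh
./arc_stats.sh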
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Thanks ;)
 

m3ki

Contributor
Joined
Jun 20, 2016
Messages
118
I was wondering if this is the right place to ask.

It seems my ARC stats have been going down steadily over the past month or so.

Can anyone help?

Here is the output of my arc_summary.py:

Code:
arc_summary.py
System Memory:

        0.61%   192.87  MiB Active,     4.20%   1.31    GiB Inact
        93.95%  29.22   GiB Wired,      0.01%   3.33    MiB Cache
        1.23%   393.14  MiB Free,       0.00%   0       Bytes Gap

        Real Installed:                         32.00   GiB
        Real Available:                 99.79%  31.93   GiB
        Real Managed:                   97.41%  31.10   GiB

        Logical Total:                          32.00   GiB
        Logical Used:                   94.71%  30.31   GiB
        Logical Free:                   5.29%   1.69    GiB

Kernel Memory:                                  395.89  MiB
        Data:                           93.21%  369.02  MiB
        Text:                           6.79%   26.88   MiB

Kernel Memory Map:                              31.10   GiB
        Size:                           88.37%  27.49   GiB
        Free:                           11.63%  3.62    GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                15.14m
        Mutex Misses:                           2.81k
        Evict Skips:                            2.81k

ARC Size:                               88.10%  26.52   GiB
        Target Size: (Adaptive)         88.18%  26.55   GiB
        Min Size (Hard Limit):          12.50%  3.76    GiB
        Max Size (High Water):          8:1     30.10   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       63.60%  16.88   GiB
        Frequently Used Cache Size:     36.40%  9.66    GiB

ARC Hash Breakdown:
        Elements Max:                           658.06k
        Elements Current:               99.13%  652.30k
        Collisions:                             3.77m
        Chain Max:                              6
        Chains:                                 45.69k
                                                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                                     459.71m
        Cache Hit Ratio:                63.90%  293.77m
        Cache Miss Ratio:               36.10%  165.94m
        Actual Hit Ratio:               62.04%  285.22m

        Data Demand Efficiency:         80.88%  271.41m
        Data Prefetch Efficiency:       48.70%  16.93m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             2.73%   8.03m
          Most Recently Used:           9.89%   29.04m
          Most Frequently Used:         87.20%  256.18m
          Most Recently Used Ghost:     0.10%   300.29k
          Most Frequently Used Ghost:   0.08%   222.67k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  74.73%  219.52m
          Prefetch Data:                2.81%   8.25m
          Demand Metadata:              22.34%  65.64m
          Prefetch Metadata:            0.13%   367.23k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  31.27%  51.89m
          Prefetch Data:                5.23%   8.68m
          Demand Metadata:              63.36%  105.15m
          Prefetch Metadata:            0.13%   214.72k
                                                                Page:  3
------------------------------------------------------------------------

                                                                Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:                        7.99b
        Hit Ratio:                      0.45%   35.97m
        Miss Ratio:                     99.55%  7.96b

                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           2379
        vm.kmem_size                            33397690368
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.mode                        2
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   6
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent 60
        vfs.zfs.vdev.async_write_active_min_dirty_percent 30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.larger_ashift_minimal      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debug_flags                     0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled 1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold 70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 18446744073709551615
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  3428664115
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            15740569600
        vfs.zfs.mfu_ghost_metadata_lsize        360044032
        vfs.zfs.mfu_ghost_size                  16100613632
        vfs.zfs.mfu_data_lsize                  11272137728
        vfs.zfs.mfu_metadata_lsize              674260992
        vfs.zfs.mfu_size                        12361233408
        vfs.zfs.mru_ghost_data_lsize            10592845824
        vfs.zfs.mru_ghost_metadata_lsize        1808963072
        vfs.zfs.mru_ghost_size                  12401808896
        vfs.zfs.mru_data_lsize                  15186720768
        vfs.zfs.mru_metadata_lsize              46192640
        vfs.zfs.mru_size                        15576815616
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       3983360
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  8080987136
        vfs.zfs.arc_free_target                 56573
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         4040493568
        vfs.zfs.arc_max                         32323948544
                                                                Page:  7
------------------------------------------------------------------------



Arc_stat
  • Plex media & Backups
  • 2:50PM up 4 days, 1:53, 1 user, load averages: 0.31, 0.27, 0.26
  • 1.48GiB / 14.9GiB (freenas-boot)
  • 192GiB / 460GiB (zfast)
  • 51.6TiB / 109TiB (zroot)
  • 26.51GiB (MRU: 16.89GiB, MFU: 9.66GiB) / 32.00GiB
  • Hit ratio -> 63.86% (higher is better)
  • Prefetch -> 48.72% (higher is better)
  • Hit MFU:MRU -> 87.55%:9.62% (higher ratio is better)
  • Hit MRU Ghost -> 0.10% (lower is better)
  • Hit MFU Ghost -> 0.07% (lower is better)

One thing to note: I run a Plex server on it, and I store backups as well as large files on the server.


My specs
FreeNAS 9.10
MOBO: SuperMicro X10SL7-F
CPU: Intel(R) Xeon(R) CPU E3-1226 v3 @ 3.30GHz
RAM: 32 GB ECC (KVR1333D3E9SK2/16G)
HBA: 2x IBM M1015 (IT)
STORAGE: 2x 500GB SSD (Mirror)
STORAGE: 24x 5TB WD Red (4x 6-disk RAID-Z2)
BOOT: 2x16GB Sandisk Cruzer Fit USB 2.0
PSU: Seasonic SS-660XP2
UPS: Cyberpower CP1500AVR
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Well, you have a pool that's bigger than 100 TB and only 32 GB of RAM; that, plus the fact that the stats are getting worse over time, tells me you don't have enough RAM.

Your stats aren't good, but they're not critical either; you can live with that if the performance is OK for you as-is ;)
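If you want to track the trend yourself, the counters behind arc_summary.py are plain sysctls on FreeBSD (a minimal sketch, assuming the standard kstat names):

Code:
# sample the ARC size and hit/miss counters over time
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses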
 

m3ki

Contributor
Joined
Jun 20, 2016
Messages
118
Well, you have a pool that's bigger than 100 TB and only 32 GB of RAM; that, plus the fact that the stats are getting worse over time, tells me you don't have enough RAM.

Your stats aren't good, but they're not critical either; you can live with that if the performance is OK for you as-is ;)

This system is mostly used for backups and lots of media. Backups and data are mostly small files; it's the media library that takes up most of the space.

Media is not accessed frequently, only once or twice after being written.

My RAM is maxed out; there is no way to add more on this mobo.
Is it possible to make the ARC not cache files larger than 3 GB?
Should I turn autotune on?
Would I benefit from an L2ARC?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Is it possible to make the ARC not cache files larger than 3 GB?

AFAIK no, the cache works at the block level, not the file level.
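The closest thing is a per-dataset knob, not a per-file one: the primarycache property tells ZFS to keep all data, only metadata, or nothing in the ARC for a given dataset. A sketch, with a hypothetical dataset name:

Code:
# cache only metadata (not file data) in ARC for the media dataset
zfs set primarycache=metadata zroot/media
zfs get primarycache zroot/media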

Should I turn autotune on?

I'd say no but I may be wrong; @jgreco can help far better than me here ;)

Would I benefit from an L2ARC?

Maybe, but again, jgreco will give you better advice here; you're right at the edge of the minimum RAM to have before thinking about an L2ARC, but on the other hand it's not a VM datastore, so..
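For reference, if you do go that route later, a cache (L2ARC) device is added with zpool add; a sketch with hypothetical pool/device names:

Code:
# attach an SSD as an L2ARC device to the pool
zpool add zroot cache /dev/ada6
zpool status zroot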
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
My specs
FreeNAS 9.10
MOBO: SuperMicro X10SL7-F
CPU: Intel(R) Xeon(R) CPU E3-1226 v3 @ 3.30GHz
RAM: 32 GB ECC (KVR1333D3E9SK2/16G)
HBA: 2x IBM M1015 (IT)
STORAGE: 2x 500GB SSD (Mirror)
STORAGE: 24x 5TB WD Red (4x 6-disk RAID-Z2)
BOOT: 2x16GB Sandisk Cruzer Fit USB 2.0
PSU: Seasonic SS-660XP2
UPS: Cyberpower CP1500AVR

I'd be holding my breath every time I had to reboot that, given the number of disks and the PSU :o

I've got a similarly specced machine (motherboard, CPU, RAM) with the same PSU, but less than half the number of drives you're running.
 

m3ki

Contributor
Joined
Jun 20, 2016
Messages
118
I'd be holding my breath every time I had to reboot that, given the number of disks and the PSU :o

I've got a similarly specced machine (motherboard, CPU, RAM) with the same PSU, but less than half the number of drives you're running.
Well, my machine has been rock solid for the past year. WD Reds don't seem to be as power hungry.

 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ah yeah, I hadn't noticed it, but I confirm: you definitely want to do something about that. Just because you don't have problems now doesn't mean you won't have any tomorrow, or that you aren't abusing the PSU (it being a good quality PSU is probably why it hasn't died on you yet, but it's still not a good idea).

The problem is mainly the spin-up current; it's huge, something like 2.5 A for the WD drives and 3 A for the Seagate drives (more info if you want), so with 24 drives it's around 700 W for the WDs and 850 W for the Seagates. And that's just for the drives; you need to add all the other components in your system, which should be something like 100 W, just eyeballing it. So, yeah, I wouldn't use anything under 850-900 W on this server.
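The arithmetic behind those figures, as a quick sanity check (spin-up draw is on the 12 V rail; the per-drive amps are the rough numbers above):

Code:
# 24 drives at ~2.5 A (WD) or ~3 A (Seagate) on the 12 V rail
echo "24 * 2.5 * 12" | bc    # -> 720 W (WD)
echo "24 * 3 * 12" | bc      # -> 864 W (Seagate)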

In the end you do what you want but I really encourage you to look at this before it's too late.
 

m3ki

Contributor
Joined
Jun 20, 2016
Messages
118
Ah yeah, I hadn't noticed it, but I confirm: you definitely want to do something about that. Just because you don't have problems now doesn't mean you won't have any tomorrow, or that you aren't abusing the PSU (it being a good quality PSU is probably why it hasn't died on you yet, but it's still not a good idea).

The problem is mainly the spin-up current; it's huge, something like 2.5 A for the WD drives and 3 A for the Seagate drives (more info if you want), so with 24 drives it's around 700 W for the WDs and 850 W for the Seagates. And that's just for the drives; you need to add all the other components in your system, which should be something like 100 W, just eyeballing it. So, yeah, I wouldn't use anything under 850-900 W on this server.

In the end you do what you want but I really encourage you to look at this before it's too late.
I agree; my plan is to upgrade this server to a Supermicro chassis next year.

 