ARC stats questions/problems thread

Status
Not open for further replies.

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Not entirely sure what any of this means, but thought I'd post them up anyway :D

  • Media, Backups (TimeMachine and Rsync copies), Jails
  • 13.4TiB / 25.4TiB (APEpool1)
  • 773MiB / 14.5GiB (freenas-boot)
  • 17.06GiB (MRU: 16.07GiB, MFU: 1.07GiB) / 32.00GiB
  • Hit ratio -> 61.86% (higher is better)
  • Prefetch -> 10.33% (higher is better)
  • Hit MFU:MRU -> 30.25%:58.47% (higher ratio is better)
  • Hit MRU Ghost -> 0.46% (lower is better)
  • Hit MFU Ghost -> 1.03% (lower is better)

Given the Hit ratio looked very low, I thought I'd check out the graph in the reporting section which looks like a rollercoaster!
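For context on where these percentages come from: they're simple ratios over the kstat.zfs.misc.arcstats counters. A minimal sketch of the arithmetic (the function name is mine and the counter values are made up for illustration, not taken from this system):

```python
# Rough arithmetic behind the "Hit ratio" and "Hit MFU:MRU" lines.
# Counter names mimic FreeBSD's kstat.zfs.misc.arcstats sysctls;
# the sample values below are illustrative only.

def arc_percentages(hits, misses, mfu_hits, mru_hits):
    """Return (hit ratio, MFU share, MRU share) as percentages."""
    hit_ratio = 100.0 * hits / (hits + misses)
    mfu_share = 100.0 * mfu_hits / hits  # share of hits served from MFU
    mru_share = 100.0 * mru_hits / hits  # share of hits served from MRU
    return hit_ratio, mfu_share, mru_share

ratio, mfu, mru = arc_percentages(hits=6186, misses=3814,
                                  mfu_hits=1871, mru_hits=3617)
print(f"Hit ratio -> {ratio:.2f}%  Hit MFU:MRU -> {mfu:.2f}%:{mru:.2f}%")
# -> Hit ratio -> 61.86%  Hit MFU:MRU -> 30.25%:58.47%
```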

[Attachment: ahr.jpg (ARC hit ratio graph)]
 

JJT211

Patron
Joined
Jul 4, 2014
Messages
323
Yeah, and you have a backwards MFU:MRU ratio... weird.

How does your system feel? When was your last reboot? Looks like it's been a while. If so, that also indicates you haven't taken any updates in a while. The updates on 9.3, in my experience, have been all around excellent in terms of both stability and performance.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Feels fine :)

I've got about 15 friends and family accessing Plex, although very rarely more than 1 or 2 at the same time, and 5 owncloud users with ~50-100GB per account. All other jails working fine too!

Internally, copying files across to the pool is pretty quick, usually around 85-95MB/s, and replication to my backup machine seems pretty quick too (I recently did a full replication of almost 9TB in less than 48 hours).

Machine was updated a few days ago so hasn't been up for very long, but happy with the performance.
 

JJT211

Patron
Joined
Jul 4, 2014
Messages
323
Yea, sounds like you have quite the active server.

15 friends and family, wow. I've got 3 or 4, and initially I thought they'd be accessing all the time. But it seems, like you, it's rare for more than 1 or 2 at once.

BTW, what's your upload bandwidth, and what quality do you have everyone set to stream at? I really wish they'd have an option on the server side to control it, but it doesn't seem like it's something they wanna do.

Anyways, I guess the arcstats aren't quite as indicative as one might think.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, 15 users, even if not all at the same time, means a lot of different data, so the ARC must churn quite a lot. This is why the hit ratio is low and the MFU:MRU hit ratio very, very low.

It's not a big problem, especially if the performance is OK for you, but more RAM would be useful on this server.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
I've got fibre broadband from Virgin Media, so one of the fastest consumer services in the UK. I get ~150Mbps down and ~12Mbps up, and it's pretty consistent. Have Plex set to stream using automatic settings and haven't had many complaints!
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Streaming is a very light load (a copy is heavier, for example), so no problem there. And the bottleneck seems to be the network, so no problem ;)

If you had 10GbE, you'd probably see that the bottleneck is the pool, and then more RAM would be beneficial.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
More RAM isn't an option as it's maxed out!

Thought I'd look at this again, after a recent update and with the server up for 6 days:

  • Put your data type(s) here...
  • 14.2TiB / 25.4TiB (APEpool1)
  • 914MiB / 14.5GiB (freenas-boot)
  • 14.92GiB (MRU: 12.80GiB, MFU: 2.16GiB) / 32.00GiB
  • Hit ratio -> 82.78% (higher is better)
  • Prefetch -> 57.77% (higher is better)
  • Hit MFU:MRU -> 78.36%:11.79% (higher ratio is better)
  • Hit MRU Ghost -> 0.11% (lower is better)
  • Hit MFU Ghost -> 0.84% (lower is better)

Looks slightly better now, and the MFU:MRU are the right way around :D
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, far better actually ;)
 

nanodec

Dabbler
Joined
Jan 14, 2015
Messages
34
Reposting this from the other ARC thread....

Got around to writing/running that script... hopefully this looks right:
Lots of stuff - backups, music, etc.
528MiB / 14.5GiB (freenas-boot)
375GiB / 4.53TiB (pool1)
11.47GiB (MRU: 9.96GiB, MFU: 1.56GiB) / 16.00GiB
Hit ratio -> 98.56% (higher is better)
Prefetch -> 12.38% (higher is better)
Hit MFU:MRU -> 99.69%:0.23% (higher ratio is better)
Hit MRU Ghost -> 0.00% (lower is better)
Hit MFU Ghost -> 0.00% (lower is better)

What is prefetch, and why is my prefetch % so low?

When I ran this script, my box was on its 24th day of uptime with a really low load; it's just myself accessing it periodically for misc. uses. Wasn't sure about the prefetch % since it's only 12.38%... can anyone shed any light on this?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
The prefetch percentage is the percentage of the data in the ARC that was prefetched. Prefetched data is data that was put in the cache before you requested it, in anticipation of future reads. For example, if you start reading a big 5GB video file, you'll (say) ask for the first GB, and as you do, ZFS will guess that you'll also want the remaining 4GB later, so it prefetches those 4GB.

Very light usage is likely why your prefetch % is low. It's not a problem; all the stats look good ;)
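To make this concrete: the "Data Prefetch Efficiency" figure arc_summary.py reports is just prefetched hits over total prefetched reads. A small sketch (the function name is mine; the sample counters are taken from the arc_summary.py output posted further down this thread):

```python
# Prefetch efficiency: of the blocks ZFS speculatively read ahead,
# what fraction was actually requested afterwards? The inputs
# correspond to arcstats' prefetch_data_hits / prefetch_data_misses.

def prefetch_efficiency(prefetch_hits, prefetch_misses):
    """Percentage of prefetched reads that became cache hits."""
    total = prefetch_hits + prefetch_misses
    return 100.0 * prefetch_hits / total if total else 0.0

# Sample: 445.48m prefetch-data hits and 274.26m misses give the
# "Data Prefetch Efficiency: 61.89%" line seen in arc_summary output.
print(f"{prefetch_efficiency(445.48e6, 274.26e6):.2f}%")  # -> 61.89%
```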
 

mattmac24

Dabbler
Joined
Jun 27, 2011
Messages
21
So, not sure why my hit ratio is so low. Can I do anything to improve it and improve performance? I'm running FreeNAS on an HP MicroServer N54L with no plugins/jails running. The only use of the storage is Plex media (Plex runs on a different server). Uptime is 36 days.


  • RAIDZ with 6 x 3TB WD Red Drives.
  • 8.74TiB / 16.2TiB (MAIN)
  • 3.63GiB / 14.5GiB (freenas-boot)
  • 4.52GiB (MRU: 10.29GiB, MFU: 874.81MiB) / 16.00GiB
  • Hit ratio -> 75.69% (higher is better)
  • Prefetch -> 32.96% (higher is better)
  • Hit MFU:MRU -> 98.46%:0.34% (higher ratio is better)
  • Hit MRU Ghost -> 0.01% (lower is better)
  • Hit MFU Ghost -> 0.08% (lower is better)

Thanks!
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Maybe it's because of how Plex uses the data (I don't know anything about Plex, so I can't help further). But is the performance OK right now? If yes, then don't bother ;)
 

mattmac24

Dabbler
Joined
Jun 27, 2011
Messages
21
Performance is not great to be honest. Not really sure how I can improve that though. Any tips given the above info?
 

moonshine

Dabbler
Joined
Dec 4, 2014
Messages
14
Been following peripherally and I don't have exact outputs... will post later tonight. However, briefly:

5x3TB WD Reds, ASRock C2550D4I mobo, 16GB ECC RAM.

16GB gave me an ARC hit ratio of approx. 81%, with all the other ratios being great...

Recently upgraded to 32GB ECC, and after 1 day the hit ratio is about 83%. Plex runs faster and loads my libraries faster, but other than that I wonder what else I need to do to optimize my ARC hit ratio!
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
If the performance is OK for you, then you don't need to optimize anything ;)
 

David E

Contributor
Joined
Nov 1, 2013
Messages
119
I was just looking through the raw output of the arc_summary.py program and saw it listing my L2 ARC as Degraded, even though zpool status doesn't show anything negative. Does anyone else see something like this? Should I worry?

L2 ARC Summary: (DEGRADED)
Passed Headroom: 268.35m
Tried Lock Failures: 1.25m
IO In Progress: 5
Low Memory Aborts: 2.08k
Free on Write: 3.65m
Writes While Full: 1.81m
R/W Clashes: 3.48k
Bad Checksums: 14.62m
IO Errors: 11.44m
SPA Mismatch: 170.02m

And here is the full output up to and including the L2 ARC summary

System Memory:

1.88% 595.14 MiB Active, 19.03% 5.90 GiB Inact
73.31% 22.72 GiB Wired, 1.15% 366.01 MiB Cache
4.62% 1.43 GiB Free, 0.01% 1.82 MiB Gap

Real Installed: 32.00 GiB
Real Available: 99.84% 31.95 GiB
Real Managed: 97.01% 30.99 GiB

Logical Total: 32.00 GiB
Logical Used: 75.98% 24.31 GiB
Logical Free: 24.02% 7.69 GiB

Kernel Memory: 450.60 MiB
Data: 94.66% 426.52 MiB
Text: 5.34% 24.08 MiB

Kernel Memory Map: 28.18 GiB
Size: 73.78% 20.79 GiB
Free: 26.22% 7.39 GiB
Page: 1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Storage pool Version: 5000
Filesystem Version: 5
Memory Throttle Count: 0

ARC Misc:
Deleted: 556.16m
Recycle Misses: 11.46m
Mutex Misses: 722.69k
Evict Skips: 722.69k

ARC Size: 70.78% 21.23 GiB
Target Size: (Adaptive) 70.78% 21.23 GiB
Min Size (Hard Limit): 12.50% 3.75 GiB
Max Size (High Water): 8:1 29.99 GiB

ARC Size Breakdown:
Recently Used Cache Size: 92.25% 19.58 GiB
Frequently Used Cache Size: 7.75% 1.65 GiB

ARC Hash Breakdown:
Elements Max: 3.66m
Elements Current: 79.32% 2.90m
Collisions: 916.95m
Chain Max: 10
Chains: 643.22k
Page: 2
------------------------------------------------------------------------

ARC Total accesses: 10.86b
Cache Hit Ratio: 94.60% 10.27b
Cache Miss Ratio: 5.40% 585.74m
Actual Hit Ratio: 90.83% 9.86b

Data Demand Efficiency: 92.19% 3.91b
Data Prefetch Efficiency: 61.89% 719.74m

CACHE HITS BY CACHE LIST:
Anonymously Used: 3.56% 366.05m
Most Recently Used: 12.46% 1.28b
Most Frequently Used: 83.55% 8.58b
Most Recently Used Ghost: 0.06% 5.77m
Most Frequently Used Ghost: 0.37% 38.06m

CACHE HITS BY DATA TYPE:
Demand Data: 35.10% 3.60b
Prefetch Data: 4.34% 445.48m
Demand Metadata: 59.96% 6.16b
Prefetch Metadata: 0.61% 62.63m

CACHE MISSES BY DATA TYPE:
Demand Data: 52.15% 305.46m
Prefetch Data: 46.82% 274.26m
Demand Metadata: 0.49% 2.90m
Prefetch Metadata: 0.53% 3.13m
Page: 3
------------------------------------------------------------------------

L2 ARC Summary: (DEGRADED)
Passed Headroom: 268.35m
Tried Lock Failures: 1.25m
IO In Progress: 5
Low Memory Aborts: 2.08k
Free on Write: 3.65m
Writes While Full: 1.81m
R/W Clashes: 3.48k
Bad Checksums: 14.62m
IO Errors: 11.44m
SPA Mismatch: 170.02m

L2 ARC Size: (Adaptive) 240.20 GiB
Header Size: 0.18% 451.19 MiB

L2 ARC Evicts:
Lock Retries: 1.73k
Upon Reading: 108

L2 ARC Breakdown: 585.74m
Hit Ratio: 23.40% 137.05m
Miss Ratio: 76.60% 448.68m
Feeds: 4.57m

L2 ARC Buffer:
Bytes Scanned: 422.49 TiB
Buffer Iterations: 4.57m
List Iterations: 271.28m
NULL List Iterations: 214.88k

L2 ARC Writes:
Writes Sent: 100.00% 2.80m
Page: 4
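For what it's worth, the DEGRADED label in arc_summary.py appears to be driven by the L2ARC error counters rather than by zpool status: as best I can tell, it downgrades the summary whenever the bad-checksum or I/O-error counts are nonzero. The sketch below is my approximation of that decision, not the script's actual code; with 14.62m bad checksums and 11.44m I/O errors, the cache device itself may be worth checking.

```python
# Approximate logic for arc_summary.py's L2 ARC health label
# (my reconstruction, not the script's exact code): any nonzero
# checksum-error or I/O-error counter flags the L2 ARC as DEGRADED.

def l2arc_health(l2_cksum_bad: int, l2_io_error: int) -> str:
    """HEALTHY/DEGRADED decision, sketched as an assumption."""
    if l2_cksum_bad > 0 or l2_io_error > 0:
        return "DEGRADED"
    return "HEALTHY"

print(l2arc_health(0, 0))                    # -> HEALTHY
print(l2arc_health(14_620_000, 11_440_000))  # -> DEGRADED
```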
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Heya,
I figured I might post in this thread too, since a quick glance at others' stats suggests mine are far worse...

I'm at 16GB now. Would 48GB suffice to "remedy" this pattern of use?
[edit: these stats also include the very first migration and syncing of all systems to the FreeNAS box. Maybe a couple of weeks of additional use, rebooted and updated to 9.10-STABLE (most recent as of today), would be fairer?]
  • Single user home
  • 8:45AM up 28 days, 12:55, 1 user, load averages: 0.00, 0.00, 0.00
  • 551MiB / 29.8GiB (freenas-boot)
  • 22.0TiB / 38TiB (wd60efrx)
  • 12.15GiB (MRU: 11.39GiB, MFU: 777.87MiB) / 16.00GiB
  • Hit ratio -> 75.41% (higher is better)
  • Prefetch -> 1.48% (higher is better)
  • Hit MFU:MRU -> 53.09%:43.33% (higher ratio is better)
  • Hit MRU Ghost -> 0.70% (lower is better)
  • Hit MFU Ghost -> 2.93% (lower is better)
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, a month of uptime is fine to let the ARC warm up.

AFAICS you're short on RAM. 32GB should be enough; test with that and see if it's OK or if you need more ;)
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Yep, a month of uptime is fine to let the ARC warm up.

AFAICS you're short on RAM. 32GB should be enough; test with that and see if it's OK or if you need more ;)
I figured I'd get 2x16GB sticks additionally, for a total of 8+8+16+16, with them working in dual-channel matched pairs.
This should be fine according to the local SM vendor. Are there any FreeNAS-specific shenanigans hidden somewhere that strongly suggest the RAM stick sizes should be matched across channels?
 