ARC using only a small percentage of RAM


BrentI

Dabbler
Joined
Nov 29, 2014
Messages
12
We have a FreeNAS system that is used for NFS file storage and is kept busy with a lot of random reads and writes.
It is using about 27TB of its 36TB of storage; however, I have noticed that only a small percentage (12.5%) of RAM is being used for the ARC.

Does anyone have any ideas why this may be? I thought it generally used all the memory that was available.
Most of the time there is about 80 Mbps of read/write traffic going through.

This is the output from arc_summary.py
[root@tga-backup] ~# arc_summary.py | more
System Memory:

0.16% 77.37 MiB Active, 1.47% 703.98 MiB Inact
16.08% 7.50 GiB Wired, 0.00% 0 Bytes Cache
82.29% 38.38 GiB Free, 0.00% 0 Bytes Gap

Real Installed: 48.00 GiB
Real Available: 99.78% 47.89 GiB
Real Managed: 97.39% 46.65 GiB

Logical Total: 48.00 GiB
Logical Used: 18.60% 8.93 GiB
Logical Free: 81.40% 39.07 GiB

Kernel Memory: 488.77 MiB
Data: 94.60% 462.35 MiB
Text: 5.40% 26.41 MiB

Kernel Memory Map: 46.65 GiB
Size: 12.22% 5.70 GiB
Free: 87.78% 40.95 GiB
Page: 1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Storage pool Version: 5000
Filesystem Version: 5
Memory Throttle Count: 0

ARC Misc:
Deleted: 230.97m
Mutex Misses: 39.11k
Evict Skips: 39.11k

ARC Size: 12.49% 5.70 GiB
Target Size: (Adaptive) 12.50% 5.71 GiB
Min Size (Hard Limit): 12.50% 5.71 GiB
Max Size (High Water): 8:1 45.65 GiB

ARC Size Breakdown:
Recently Used Cache Size: 93.75% 5.35 GiB
Frequently Used Cache Size: 6.25% 365.17 MiB

ARC Hash Breakdown:
Elements Max: 2.95m
Elements Current: 16.34% 481.98k
Collisions: 75.43m
Chain Max: 7
Chains: 13.27k
Page: 2
------------------------------------------------------------------------

ARC Total accesses: 2.85b
Cache Hit Ratio: 85.28% 2.43b
Cache Miss Ratio: 14.72% 418.96m
Actual Hit Ratio: 84.46% 2.40b

Data Demand Efficiency: 89.21% 1.70b
Data Prefetch Efficiency: 34.52% 1.37m

CACHE HITS BY CACHE LIST:
Anonymously Used: 0.79% 19.08m
Most Recently Used: 39.07% 947.83m
Most Frequently Used: 59.98% 1.46b
Most Recently Used Ghost: 0.12% 3.03m
Most Frequently Used Ghost: 0.04% 1.02m

CACHE HITS BY DATA TYPE:
Demand Data: 62.50% 1.52b
Prefetch Data: 0.02% 472.74k
Demand Metadata: 36.10% 875.83m
Prefetch Metadata: 1.38% 33.56m

CACHE MISSES BY DATA TYPE:
Demand Data: 43.79% 183.47m
Prefetch Data: 0.21% 896.62k
Demand Metadata: 52.93% 221.75m
Prefetch Metadata: 3.07% 12.84m
Page: 3
------------------------------------------------------------------------

Page: 4
------------------------------------------------------------------------

DMU Prefetch Efficiency: 6.50m
Hit Ratio: 10.26% 667.17k
Miss Ratio: 89.74% 5.83m
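
For reference, the raw counters behind this summary can also be pulled straight from sysctl (assuming the standard FreeBSD/FreeNAS 9.x kstat names; all values are in bytes):

# current ARC size, adaptive target, and the hard limits
sysctl kstat.zfs.misc.arcstats.size      # current ARC size
sysctl kstat.zfs.misc.arcstats.c         # current adaptive target size
sysctl kstat.zfs.misc.arcstats.c_min     # floor the ARC can shrink to
sysctl kstat.zfs.misc.arcstats.c_max     # ceiling the ARC can grow to
sysctl vfs.zfs.arc_min vfs.zfs.arc_max   # the tunable limits as currently set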
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Please provide hardware specifications as per the forum rules. Also describe your usage scenarios to help identify the issue.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Your ARC has shrunk to its minimum size :-/
 

BrentI

Dabbler
Joined
Nov 29, 2014
Messages
12
Sorry about not supplying HW specs:
16GB RAM
10x WD Red 4TB drives (1x RAID-Z2)
LSI 9211-8i disk controllers
Onboard NIC & Intel Pro 1000

I agree that it has shrunk to its minimum, but why would it shrink to its minimum when there is plenty of I/O still happening? I thought it would use as much ARC as possible.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would guess the ARC algorithm has decided your access pattern doesn't benefit from more cache. A cache is not much help for lots of random I/O spread across a working set far larger than RAM.

You can also see it has devoted most of the ARC to recent data, and almost none to frequent data, which makes sense for random I/O.

If your I/O became less random, the ARC would probably grow.
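
If you want to watch that split directly, the underlying numbers are in the kstats too (sysctl names assume FreeNAS 9.x / FreeBSD): the ARC target size is "c" and the portion reserved for recently used data is "p", so the frequently used share is roughly c - p, which is how arc_summary.py derives the breakdown above, if I remember right.

sysctl kstat.zfs.misc.arcstats.c   # total ARC target size
sysctl kstat.zfs.misc.arcstats.p   # portion of the target reserved for the recently used (MRU) list
# frequently used share is approximately c - p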
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I'm not an expert, but that doesn't seem like a likely scenario to me. There is no reason to throw away ARC allocation unless the ARC is already full and needs to make room for newer blocks. I don't understand why the ARC would ever shrink if the system itself doesn't need the memory.

The only time I've seen the ARC shrink was with what I think was a memory leak in some of the FreeNAS versions earlier this year, where inactive memory would continually grow. This case doesn't seem to have that characteristic, though, given the data above. (And the issue I'm talking about seems to be fixed in the recent 9.10 U2 versions.)
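
If you want to rule that out, a rough sketch like this (stock FreeBSD sysctl names; adjust the interval to taste) logged for a day or two would show whether Inact creeps up while the ARC shrinks:

#!/bin/sh
# Log inactive memory, wired memory, and ARC size once a minute.
PAGESIZE=$(sysctl -n hw.pagesize)
while :; do
    INACT=$(sysctl -n vm.stats.vm.v_inactive_count)
    WIRED=$(sysctl -n vm.stats.vm.v_wire_count)
    ARC=$(sysctl -n kstat.zfs.misc.arcstats.size)
    echo "$(date '+%F %T') inact_bytes=$((INACT * PAGESIZE)) wired_bytes=$((WIRED * PAGESIZE)) arc_bytes=$ARC"
    sleep 60
done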
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Right, and from my digging into swap/inactive/ARC bugs, one of the recurring themes is that one of the subsystems battles the other until one of them ends up shrinking or growing to a limit.

I don't know why, but I'd guess you've found a bug.
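
As a stopgap (not a fix), you could try pinning the floor higher with a loader tunable in the GUI (System -> Tunables on FreeNAS 9.x) and see whether the rest of the system still fights it. Something like the following, with whatever floor suits your workload:

Variable: vfs.zfs.arc_min
Value:    17179869184    # 16 GiB expressed in bytes; pick a value that fits your box
Type:     loader

A reboot is needed for a loader tunable to take effect.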
 

BrentI

Dabbler
Joined
Nov 29, 2014
Messages
12
Thanks guys for your feedback.
I will schedule downtime to upgrade to U2, since it sounds like bugs were fixed there that may help with this.
 
Status
Not open for further replies.