
What are your ARC statistics?


JJT211

Patron
Joined
Jul 4, 2014
Messages
321
@KTrain
To avoid broken lines in the script, be sure to maximize the PuTTY window before pasting.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
8,729
Server 1
  • CIFS server. Documents, pictures, movies
  • 9:20AM up 70 days, 14:49, 1 user, load averages: 0.17, 0.17, 0.17
  • 10.1TiB / 21.8TiB (Tank)
  • 37.86GiB (MRU: 17.37GiB, MFU: 20.52GiB) / 48.00GiB
  • Hit ratio -> 94.87% (higher is better)
  • Prefetch -> 13.48% (higher is better)
  • Hit MFU:MRU -> 97.88%:1.03% (higher ratio is better)
  • Hit MRU Ghost -> 0.04% (lower is better)
  • Hit MFU Ghost -> 0.46% (lower is better)

Server 2
  • ZFS replication target, hosts a couple of vbox VMs and a few jails
  • 9:15AM up 30 days, 20:14, 1 user, load averages: 0.20, 0.15, 0.09
  • 14.2TiB / 29TiB (Tank)
  • 22.71GiB (MRU: 17.41GiB, MFU: 5.34GiB) / 32.00GiB
  • Hit ratio -> 61.56% (higher is better)
  • Prefetch -> 49.47% (higher is better)
  • Hit MFU:MRU -> 99.47%:0.27% (higher ratio is better)
  • Hit MRU Ghost -> 0.00% (lower is better)
  • Hit MFU Ghost -> 0.01% (lower is better)
 
Last edited:

Pointeo13

Explorer
Joined
Apr 18, 2014
Messages
86
One last update now that I've done a hardware upgrade: the server has been up for 20 days. I swapped out the hardware and added 1.24TB (1048528MB) of memory.

  • 59.3TiB / 97.5TiB (Raidz_2_SATA) ***Archive Media / Backups***
  • 1.11TiB / 3.23TiB (Raidz_Mirror_SAS) ***VMware VMs***
  • 2.75GiB / 14.8GiB (freenas-boot)
  • 669.98GiB (MRU: 601.51GiB, MFU: 68.50GiB) / 1.00TiB
  • Hit ratio -> 80.68% (higher is better)
  • Prefetch -> 64.43% (higher is better)
  • Hit MFU:MRU -> 68.67%:27.14% (higher ratio is better)
  • Hit MRU Ghost -> 0.06% (lower is better)
  • Hit MFU Ghost -> 0.59% (lower is better)
 
Last edited:

shnurov

Explorer
Joined
Jul 22, 2015
Messages
74
First build, and I'm still trying to figure out how to get the best out of it! Working on it slowly. :)

  • Office server with 200,000+ photos.
  • 2:11PM up 1 day, 4:01, 1 user, load averages: 0.17, 0.19, 0.17
  • 1.12GiB / 14.5GiB (freenas-boot)
  • 3.36TiB / 7.25TiB (set4tb)
  • 11.54GiB (MRU: 10.82GiB, MFU: 741.05MiB) / 16.00GiB
  • Hit ratio -> 98.34% (higher is better)
  • Prefetch -> 5.07% (higher is better)
  • Hit MFU:MRU -> 96.34%:2.38% (higher ratio is better)
  • Hit MRU Ghost -> 0.01% (lower is better)
  • Hit MFU Ghost -> 0.01% (lower is better)
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,532
  • Everything under the sun, about 90% multimedia files.
  • 9:35PM up 2 days, 2:30, 3 users, load averages: 1.11, 0.73, 0.44
  • 6.29TiB / 9.97TiB (Multimedia)
  • 182GiB / 696GiB (Stuff)
  • 652MiB / 3.66GiB (freenas-boot)
  • 11.68GiB (MRU: 10.92GiB, MFU: 1.78GiB) / 32.00GiB
  • Hit ratio -> 89.65% (higher is better)
  • Prefetch -> 26.24% (higher is better)
  • Hit MFU:MRU -> 80.93%:14.85% (higher ratio is better)
  • Hit MRU Ghost -> 0.59% (lower is better)
  • Hit MFU Ghost -> 0.99% (lower is better)


I'm on the nightly train, so I reboot every week or two. This is just for personal use, and my friends stream from Plex. I added 16GB of RAM a few days ago, so my ARC is still filling up.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,532
Turns out my max ARC size was set to 13 GB, since I had autotune enabled and previously only had 16 GB of RAM in there. I deleted all the autotune values and set the max ARC size to 30 GB, and here are my new stats after rebooting last night.

  • 7:46AM up 8:27, 1 user, load averages: 0.72, 0.65, 0.58
  • 6.29TiB / 9.97TiB (Multimedia)
  • 170GiB / 696GiB (Stuff)
  • 698MiB / 3.66GiB (freenas-boot)
  • 24.32GiB (MRU: 22.82GiB, MFU: 1.52GiB) / 32.00GiB
  • Hit ratio -> 78.42% (higher is better)
  • Prefetch -> 1.67% (higher is better)
  • Hit MFU:MRU -> 68.49%:25.33% (higher ratio is better)
  • Hit MRU Ghost -> 0.07% (lower is better)
  • Hit MFU Ghost -> 0.50% (lower is better)
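For anyone repeating this: FreeNAS autotune sizes `vfs.zfs.arc_max` from the RAM present at the time, so a bigger cap has to be set by hand, and the tunable takes a value in bytes. A tiny sketch of the conversion (the helper name is ours, not FreeNAS's):

```python
# Hedged sketch: converting a GiB cap into the byte value expected
# by the vfs.zfs.arc_max tunable (the tunable name is real; the
# helper function is just for illustration).

def arc_max_bytes(gib):
    """Bytes for an ARC cap expressed in GiB."""
    return gib * 1024**3

print(arc_max_bytes(30))  # 32212254720, i.e. a 30 GiB ARC cap
```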
 

sweeze

Dabbler
Joined
Sep 23, 2013
Messages
24
  • backups, photos, home directories, music/movies
  • 5:45AM up 27 days, 2:42, 1 user, load averages: 1.41, 1.46, 1.41
  • 3.76GiB / 7.50GiB (freenas-boot)
  • 4.28TiB / 5.44TiB (nevernude)
  • 4.13GiB (MRU: 2.99GiB, MFU: 1.14GiB) / 16.00GiB
  • Hit ratio -> 88.88% (higher is better)
  • Prefetch -> 58.52% (higher is better)
  • Hit MFU:MRU -> 98.54%:0.62% (higher ratio is better)
  • Hit MRU Ghost -> 0.33% (lower is better)
  • Hit MFU Ghost -> 1.21% (lower is better)

Sandy Bridge system with L2ARC and a modest ZIL.
 

rwhitlock

Dabbler
Joined
Aug 17, 2015
Messages
13
Thanks, Bidule0hm, for the script. I was a little confused by the layout of the ARC row, and I thought showing the "real available" RAM would be useful. I also noticed that, for some reason, in some cases the sizeMFU and sizeMRU values add up to the ARC target size and not the ARC size. Just something I noticed.
Code:
#!/bin/sh

arcSummary="$(python /usr/local/www/freenasUI/tools/arc_summary.py)"
pools="$(zpool list -H -o name)"

echo ""
echo "[LIST]"
echo "[*]Put your data type(s) here..."
echo "[*]$(uptime)"

for pool in $pools
do
    used="$(zpool list -H -o allocated ${pool})"
    total="$(zpool list -H -o size ${pool})"
    echo "[*](${pool}) ${used}iB / ${total}iB"
done

arc="$(echo "${arcSummary}" | grep -m 1 "ARC Size:" | awk '{print $4 $5}')"
sizeMRU="$(echo "${arcSummary}" | grep "Recently Used Cache Size:" | awk '{print $6 $7}')"
sizeMFU="$(echo "${arcSummary}" | grep "Frequently Used Cache Size:" | awk '{print $6 $7}')"
ram="$(echo "${arcSummary}" | grep "Real Installed:" | awk '{print $3 $4}')"
ramava="$(echo "${arcSummary}" | grep "Real Available:" | awk '{print $3}')"
hit="$(echo "${arcSummary}" | grep "Cache Hit Ratio:" | awk '{print $4}')"
pre="$(echo "${arcSummary}" | grep "Data Prefetch Efficiency:" | awk '{print $4}')"
hitMRU="$(echo "${arcSummary}" | grep "Most Recently Used:" | awk '{print $4}')"
hitMFU="$(echo "${arcSummary}" | grep "Most Frequently Used:" | awk '{print $4}')"
hitMRUG="$(echo "${arcSummary}" | grep "Most Recently Used Ghost:" | awk '{print $5}')"
hitMFUG="$(echo "${arcSummary}" | grep "Most Frequently Used Ghost:" | awk '{print $5}')"

echo "[*]Installed \ Available RAM - > ${ram} / ${ramava}"
echo "[*]ARC Size -> ${arc} (MRU: ${sizeMRU}, MFU: ${sizeMFU})"
echo "[*]Hit ratio -> ${hit} (higher is better)"
echo "[*]Prefetch -> ${pre} (higher is better)"
echo "[*]Hit MFU:MRU -> ${hitMFU}:${hitMRU} (higher ratio is better)"
echo "[*]Hit MRU Ghost -> ${hitMRUG} (lower is better)"
echo "[*]Hit MFU Ghost -> ${hitMFUG} (lower is better)"
echo "[/LIST]"
echo ""
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
That would explain the weird numbers in the stats of 3 or 4 members. Unfortunately, I can't do anything about it.
 

rwhitlock

Dabbler
Joined
Aug 17, 2015
Messages
13
So I was poking around arc_summary.py. I believe this is the cause of those anomalies.


Code:
    output['arc_size_break'] = {}
    if arc_size > target_size:
        mfu_size = (arc_size - mru_size)
        output['arc_size_break']['recently_used_cache_size'] = {
            'per': fPerc(mru_size, arc_size),
            'num': fBytes(mru_size),
        }
        output['arc_size_break']['frequently_used_cache_size'] = {
            'per': fPerc(mfu_size, arc_size),
            'num': fBytes(mfu_size),
        }

    elif arc_size < target_size:
        mfu_size = (target_size - mru_size)
        output['arc_size_break']['recently_used_cache_size'] = {
            'per': fPerc(mru_size, target_size),
            'num': fBytes(mru_size),
        }
        output['arc_size_break']['frequently_used_cache_size'] = {
            'per': fPerc(mfu_size, target_size),
            'num': fBytes(mfu_size),
        }
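To make the anomaly concrete, here is a minimal sketch (with made-up sizes) of the logic quoted above: when arc_size is below target_size, the MFU figure is derived from the target, so the reported MRU + MFU sums to the target size rather than the actual ARC size, which is exactly the mismatch noted earlier in the thread.

```python
# Sketch of arc_summary.py's size-breakdown branches; the sizes
# below are hypothetical, chosen only to show the discrepancy.

def split_sizes(arc_size, target_size, mru_size):
    """Mimic the quoted breakdown logic; return (mru, mfu) in bytes."""
    if arc_size > target_size:
        mfu_size = arc_size - mru_size
    elif arc_size < target_size:
        mfu_size = target_size - mru_size   # the suspect line
    else:
        mfu_size = arc_size - mru_size
    return mru_size, mfu_size

GiB = 1024**3
mru, mfu = split_sizes(arc_size=22 * GiB, target_size=32 * GiB, mru_size=17 * GiB)
print((mru + mfu) // GiB)  # 32: the split sums to the target, not the 22 GiB ARC
```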
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Why? Just why?

Well, I could always copy the Python script and modify it, but I don't think it's worthwhile. And if the dev did this, there may be a good reason for it, so it's probably not a good idea to modify it without the original dev's opinion.
 

rwhitlock

Dabbler
Joined
Aug 17, 2015
Messages
13
I agree, we shouldn't start changing stuff all willy-nilly. Still, I fixed it: `top` has the correct MFU and MRU values. I edited the script again to reflect the changes.


Code:
#!/bin/sh

arcSummary="$(python /usr/local/www/freenasUI/tools/arc_summary.py)"
stats="$(top)"
pools="$(zpool list -H -o name)"

echo ""
echo "[LIST]"
echo "[*]$(uname -r)"
echo "[*]Put your data type(s) here..."
echo "[*]$(uptime)"

for pool in $pools
do
    used="$(zpool list -H -o allocated ${pool})"
    total="$(zpool list -H -o size ${pool})"
    echo "[*](${pool}) ${used}iB / ${total}iB"
done

arc="$(echo "${arcSummary}" | grep -m 1 "ARC Size:" | awk '{print $4 $5}')"
sizeMRU="$(echo "${stats}" | grep "MRU" | awk '{print $6}')"
sizeMFU="$(echo "${stats}" | grep "MFU" | awk '{print $4}')"
ram="$(echo "${arcSummary}" | grep "Real Installed:" | awk '{print $3 $4}')"
ramava="$(echo "${arcSummary}" | grep "Real Available:" | awk '{print $3}')"
hit="$(echo "${arcSummary}" | grep "Cache Hit Ratio:" | awk '{print $4}')"
pre="$(echo "${arcSummary}" | grep "Data Prefetch Efficiency:" | awk '{print $4}')"
hitMRU="$(echo "${arcSummary}" | grep "Most Recently Used:" | awk '{print $4}')"
hitMFU="$(echo "${arcSummary}" | grep "Most Frequently Used:" | awk '{print $4}')"
hitMRUG="$(echo "${arcSummary}" | grep "Most Recently Used Ghost:" | awk '{print $5}')"
hitMFUG="$(echo "${arcSummary}" | grep "Most Frequently Used Ghost:" | awk '{print $5}')"

echo "[*]Installed \ Available RAM - > ${ram} / ${ramava}"
echo "[*]ARC Size -> ${arc} (MRU: ${sizeMRU}, MFU: ${sizeMFU})"
echo "[*]Hit ratio -> ${hit} (higher is better)"
echo "[*]Prefetch -> ${pre} (higher is better)"
echo "[*]Hit MFU:MRU -> ${hitMFU}:${hitMRU} (higher ratio is better)"
echo "[*]Hit MRU Ghost -> ${hitMRUG} (lower is better)"
echo "[*]Hit MFU Ghost -> ${hitMFUG} (lower is better)"
echo "[/LIST]"
echo ""
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Are you sure top always has the correct values?
 

rwhitlock

Dabbler
Joined
Aug 17, 2015
Messages
13
I am fairly certain. I haven't looked at the code in top itself, but I tried it on a number of test FreeNAS VMs. The numbers actually make sense and add up: ARC total = MFU + MRU + Anon + Header + Other.
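For reference, a rough sketch of checking that identity against a top-style ARC line. The sample line and the parser below are ours, not from top's source; FreeBSD's top prints a line of the form "ARC: ... Total, ... MFU, ... MRU, ... Anon, ... Header, ... Other".

```python
# Sketch: parse a top-style ARC line and verify that the parts sum
# to the total. The sample line is synthetic, chosen so the sums
# work out exactly.

import re

UNITS = {'K': 1024, 'M': 1024**2, 'G': 1024**3}

def to_bytes(value):
    """Convert a top-style size like '813M' to bytes."""
    if value[-1] in UNITS:
        return int(float(value[:-1]) * UNITS[value[-1]])
    return int(value)

def parse_arc_line(line):
    """Return {label: bytes} from a '<size> <label>' list like top's ARC row."""
    fields = re.findall(r'([\d.]+[KMG]?) (\w+)', line)
    return {label: to_bytes(size) for size, label in fields}

arc = parse_arc_line("ARC: 1800M Total, 400M MFU, 1300M MRU, 16M Anon, 20M Header, 64M Other")
parts = arc['MFU'] + arc['MRU'] + arc['Anon'] + arc['Header'] + arc['Other']
print(parts == arc['Total'])  # True for this sample line
```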
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ok, then I'll add the changes to the script when I can, thanks ;)
 

The Gecko

Dabbler
Joined
Sep 16, 2013
Messages
18
  • 9.3-RELEASE-p16
  • Media Type: DVD & Blu-Ray ISOs, VMs (Fibre Channel), iSCSI LUNs for workstation backup
  • 6:37PM up 93 days, 2:54, 1 user, load averages: 0.09, 0.09, 0.07
  • (MainChassis) 20.8TiB / 43.5TiB
  • (freenas-boot) 3.08GiB / 14.9GiB
  • Installed \ Available RAM - > 192.00GiB / 99.98%
  • ARC Size -> 138.61GiB (MRU: 65G, MFU: 71G)
  • Hit ratio -> 88.24% (higher is better)
  • Prefetch -> 53.47% (higher is better)
  • Hit MFU:MRU -> 75.86%:21.55% (higher ratio is better)
  • Hit MRU Ghost -> 0.20% (lower is better)
  • Hit MFU Ghost -> 0.60% (lower is better)
 

rwhitlock

Dabbler
Joined
Aug 17, 2015
Messages
13
Damn! The available RAM percentage is not making a whole lot of sense.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
106
  • 12:30AM up 2:23, 1 user, load averages: 8.69, 8.94, 8.98
  • 521MiB / 15.9GiB (freenas-boot)
  • 342GiB / 21.8TiB (tank)
  • 15.42GiB (MRU: 14.93GiB, MFU: 502.09MiB) / 24.00GiB
  • Hit ratio -> 92.40% (higher is better)
  • Prefetch -> 0.50% (higher is better)
  • Hit MFU:MRU -> 86.90%:12.27% (higher ratio is better)
  • Hit MRU Ghost -> 0.00% (lower is better)
  • Hit MFU Ghost -> 1.98% (lower is better)
 
Joined
Apr 9, 2015
Messages
1,258
Data type: Mainly videos, along with some pictures, a few documents, and jails; most files served via CIFS or Plex. FreeNAS 10 VM as well. The system is mainly a testbed at this point.
1:45AM up 5:24, 1 user, load averages: 0.18, 0.09, 0.08
378GiB / 526GiB (tank)
1.49GiB / 14.5GiB (freenas-boot)
1.41GiB (MRU: 22.76GiB, MFU: 22.76GiB) / 48.00GiB
Hit ratio -> 96.38% (higher is better)
Prefetch -> 76.64% (higher is better)
Hit MFU:MRU -> 87.97%:10.26% (higher ratio is better)
Hit MRU Ghost -> 0.00% (lower is better)
Hit MFU Ghost -> 0.00% (lower is better)



Edit:

After a little more uptime


2:10AM up 1 day, 5:49, 1 user, load averages: 1.85, 0.98, 0.81
380GiB / 526GiB (tank)
1.49GiB / 14.5GiB (freenas-boot)
9.47GiB (MRU: 22.76GiB, MFU: 22.76GiB) / 48.00GiB
Hit ratio -> 85.58% (higher is better)
Prefetch -> 44.44% (higher is better)
Hit MFU:MRU -> 71.50%:20.27% (higher ratio is better)
Hit MRU Ghost -> 0.00% (lower is better)
Hit MFU Ghost -> 0.00% (lower is better)
 
Last edited:

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
OK, here are my stats, as per a user's request:

  • Docs/VMs/Media/"Cloud" Shit
  • 6:37PM up 21:25, 2 users, load averages: 2.81, 2.84, 2.98
  • 6.35TiB / 10.9TiB (cloud)
  • 55.4MiB / 72.5GiB (ssd.eng-cache)
  • 19.99GiB (MRU: 17.29GiB, MFU: 2.70GiB) / 32.00GiB
  • Hit ratio -> 99.22% (higher is better)
  • Prefetch -> 33.34% (higher is better)
  • Hit MFU:MRU -> 94.59%:2.87% (higher ratio is better)
  • Hit MRU Ghost -> 0.06% (lower is better)
  • Hit MFU Ghost -> 0.06% (lower is better)

When I ran this script, the system was getting (and still is) pounded by:

One VM install (Debian)
One NFS transfer
3 Macs doing Time Machine backups
One VM doing updates and installs of oVirt
Migration of ownCloud data

This system also hosts 6 VMs via VirtualBox, one of which is an oVirt manager.
Four of them were running and two were shut down.
I run 12 jails, and they were all running, performing various tasks from Plex to custom jails running my own apps.
 