L2ARC Question - What is populated in the L2ARC?


joeschmuck (Old Man, Moderator):
So I plan to set up an L2ARC of about 100GB, and I plan to do some testing to see how beneficial it will be to a home user. I'm not expecting much of a benefit, but I was thinking about something this morning...

I run the MiniDLNA plugin. If this service scans the entire database upon reboot or some other event, will that cause the L2ARC to be filled with that data instead of any other data that may be requested frequently?

So what I'm asking is: will a program internal to FreeNAS trigger what gets copied to the L2ARC, or is it external LAN requests that populate it?

I will of course test this out, but it could be a few weeks (maybe after Christmas) before I am ready to install the SSD.
 

jgreco (Resident Grinch):
ZFS doesn't know or care where the requests for data come from. Many of the NAS protocols (CIFS, AFP, etc) are implemented as userland programs and so appear to be virtually identical to MiniDLNA, which is itself a network server, so ... what's the difference again?

First, understand that L2ARC is filled from data that is likely to be evicted from ARC soon; there is no value to moving fresher stuff to slower storage, since if it's fresher, it's more likely to be accessed again sooner. This means that you need a reasonably-sized ARC, because the only way for ZFS to accurately build up a good ARC is to be seeing a fair amount of stuff cached to begin with. If you have very bursty traffic that causes massive rounds of ARC evictions, you're going to see less-good selections picked for L2ARC because the stuff flushed out to L2ARC is basically a few gallons of water out of a firehose flow of data. L2ARC is not a good substitute for a decently sized ARC.

L2ARC is populated based on several controls. The ones that you can reasonably affect are

vfs.zfs.l2arc_write_boost: 134217728
vfs.zfs.l2arc_write_max: 67108864

These settings both default to, I believe, 8MB per feed period, which is one second on FreeNAS. write_max controls how much data per second can be flushed out to your L2ARC device. write_boost controls how much is flushed out during the period before the ARC is full; this is essentially a time when nothing would be reading from L2ARC, so you can go a bit heavier on writes. The big thing to remember with these tunables, though, is that you can't just say "oh, my SSD can handle 200MB/sec so I'll set them to 200MB/sec!", because then your SSD won't be able to service read requests in a reasonable fashion. You'll see that I've picked 64MB/sec for an OCZ Agility 3 60GB; this is about 1/8th of its potential write speed.

So anyway, basically what ends up happening is that ZFS picks the older regions of its ARC and flushes them out to L2ARC at no more than l2arc_write_max per second. You don't want to get too aggressive, and you should be aware that it is not designed to instantly cache every possible bit of data that it'd be nice to have in L2ARC. The idea is that after things have been running a while, frequently requested stuff ends up in ARC, less common stuff in L2ARC, and everything else is pulled from disk.

One minor correction: if you have l2arc_feed_again set to 1, it is possible for the L2ARC flush to exceed the rate I described above; l2arc_feed_secs is the upper cap on the feed interval and defaults to 1s, but there is also l2arc_feed_min_ms, which defaults to 200, so it is possible to have several "feed_again" events happen quickly back-to-back. Do not set write_max too aggressively unless you understand the dynamics here. The code is reasonably clever and will self-manage this, assuming you give it reasonable guidance. For the workloads here I determined that 1/8th of theoretical write capacity, even accelerated through the feed_again process, would still not starve read attempts. 1/8th of theoretical write capacity is probably as aggressive as one should ever get.
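For what it's worth, a minimal sketch of how to look at and change these from a shell on FreeBSD/FreeNAS (on FreeNAS you would normally add them as Sysctls/Tunables in the GUI so they persist across reboots; the 64MB/128MB values below are just the ones I use on the Agility 3, not a recommendation for your device):

Code:
# Show the current L2ARC feed settings (bytes per feed interval)
sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost vfs.zfs.l2arc_feed_secs

# Example only: roughly 1/8th of an SSD rated around 500MB/sec sequential writes
# 500MB/sec / 8 = 64MiB = 67108864 bytes; boost set to double that
sysctl vfs.zfs.l2arc_write_max=67108864
sysctl vfs.zfs.l2arc_write_boost=134217728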
 

joeschmuck (Old Man, Moderator):
Thanks for the info. Since my posting I have done some testing on L2ARC and ZIL and found out they didn't contribute much to a home system, at least my system and for streaming media. Performance did improve in the NAS test suite. I honestly didn't expect to see any major improvement, but I had to satisfy my curiosity. All my testing was over CIFS, so now I'm working on getting my Windows 7 Pro machine to have NFS capability (damn Microsoft didn't include NFS in the Pro version) for free. I have tried some shady software, but performance was horrible, and I understand that is not typical of the built-in NFS client in the Ultimate version.

I may need to look at the adjustable values you mentioned above as I didn't adjust a single thing, I took things at stock values.
 

bollar (Patron):
On L2ARC, I think the important thing to remember is that it's slower than ARC memory and it's better to max out ARC before considering L2ARC.

Have you looked at http://cuddletech.com/arc_summary.html? The "Most Recently Used Ghosts" entries show what might have gone to L2ARC, given your current RAM.
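If you'd rather not run the whole script, I believe the same counters are exposed directly as sysctls on FreeBSD; something like this should show them (check the exact names on your FreeNAS version):

Code:
# Ghost-list hits: reads that would have been ARC hits if the ARC were larger
sysctl kstat.zfs.misc.arcstats.mru_ghost_hits kstat.zfs.misc.arcstats.mfu_ghost_hits

# Total ARC hits and misses for comparison
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses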


Have fun with NFS. You'll be the hero if you definitively figure out the "best practice" settings with FreeNAS! :)
 

jgreco (Resident Grinch):
Leaving things at the defaults is not bad; it just means that the L2ARC might take longer to warm up, especially if you have a bursty sort of utilization pattern on your pool.

A key bit is to understand that L2ARC requires some time to warm up anyways, that is, for actually-useful data to wind up there. The intent of L2ARC is really to accelerate reads of large working sets on very busy fileservers, where contention for the spindles would mean that reducing requests from the spindles results in increased aggregate performance. If you are in a situation where your fileserver is essentially idle and then you make some small number of sequential requests, you _might_ see some improvement from L2ARC too, but will you notice? Maybe, maybe not.

I've got FreeNAS running on a Xeon E3 inside ESXi and had been experimenting to find how much RAM I wanted/needed to dedicate for various workloads. We have a 4x3TB RAIDZ2 box that's used for general docs and data archival. Basically it seems like L2ARC isn't too much of a benefit because even with only 8GB RAM dedicated to the VM, the ARC efficiency is 99.48% and it's mostly metadata being cached anyways.

Code:
System Memory:

        3.06%   242.13  MiB Active,     15.65%  1.21    GiB Inact
        62.87%  4.85    GiB Wired,      0.04%   3.55    MiB Cache
        18.37%  1.42    GiB Free,       0.01%   564.00  KiB Gap

        Real Installed:                         8.00    GiB
        Real Available:                 99.81%  7.99    GiB
        Real Managed:                   96.69%  7.72    GiB

        Logical Total:                          8.00    GiB
        Logical Used:                   67.13%  5.37    GiB
        Logical Free:                   32.87%  2.63    GiB

Kernel Memory:                                  3.81    GiB
        Data:                           99.59%  3.80    GiB
        Text:                           0.41%   15.85   MiB

Kernel Memory Map:                              4.39    GiB
        Size:                           76.60%  3.36    GiB
        Free:                           23.40%  1.03    GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   28
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                24.99m
        Recycle Misses:                         1.37m
        Mutex Misses:                           37.03k
        Evict Skips:                            37.03k

ARC Size:                               91.01%  3.76    GiB
        Target Size: (Adaptive)         91.02%  3.76    GiB
        Min Size (Hard Limit):          12.50%  528.41  MiB
        Max Size (High Water):          8:1     4.13    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       22.93%  882.30  MiB
        Frequently Used Cache Size:     77.07%  2.90    GiB

ARC Hash Breakdown:
        Elements Max:                           718.42k
        Elements Current:               93.22%  669.72k
        Collisions:                             62.86m
        Chain Max:                              25
        Chains:                                 124.76k
                                                                Page:  2
------------------------------------------------------------------------

ARC Efficiency:                                 3.32b
        Cache Hit Ratio:                99.48%  3.30b
        Cache Miss Ratio:               0.52%   17.29m
        Actual Hit Ratio:               77.32%  2.57b

        Data Demand Efficiency:         99.75%  83.04m
        Data Prefetch Efficiency:       67.36%  31.05m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             22.06%  728.29m
          Most Recently Used:           1.05%   34.53m
          Most Frequently Used:         76.67%  2.53b
          Most Recently Used Ghost:     0.04%   1.20m
          Most Frequently Used Ghost:   0.18%   6.01m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  2.51%   82.83m
          Prefetch Data:                0.63%   20.92m
          Demand Metadata:              73.91%  2.44b
          Prefetch Metadata:            22.95%  757.63m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  1.21%   208.65k
          Prefetch Data:                58.61%  10.13m
          Demand Metadata:              32.83%  5.68m
          Prefetch Metadata:            7.35%   1.27m
                                                                Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        25.46m
        Tried Lock Failures:                    4.05m
        IO In Progress:                         29.33k
        Low Memory Aborts:                      34
        Free on Write:                          36.69k
        Writes While Full:                      2.56k
        R/W Clashes:                            32
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           343.50k

L2 ARC Size: (Adaptive)                         50.52   GiB
        Header Size:                    0.17%   90.32   MiB

L2 ARC Evicts:
        Lock Retries:                           140
        Upon Reading:                           645

L2 ARC Breakdown:                               17.29m
        Hit Ratio:                      31.76%  5.49m
        Miss Ratio:                     68.24%  11.80m
        Feeds:                                  2.68m

L2 ARC Buffer:
        Bytes Scanned:                          6.54    PiB
        Buffer Iterations:                      2.68m
        List Iterations:                        171.32m
        NULL List Iterations:                   854.47k

L2 ARC Writes:
        Writes Sent:                    100.00% 820.47k
                                                                Page:  4
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 4.03b
        Hit Ratio:                      69.61%  2.81b
        Miss Ratio:                     30.39%  1.23b

        Colinear:                               1.23b
          Hit Ratio:                    0.01%   136.30k
          Miss Ratio:                   99.99%  1.23b

        Stride:                                 2.67b
          Hit Ratio:                    100.00% 2.67b
          Miss Ratio:                   0.00%   62.93k

DMU Misc:
        Reclaim:                                1.23b
          Successes:                    0.21%   2.53m
          Failures:                     99.79%  1.22b

        Streams:                                138.17m
          +Resets:                      0.01%   19.32k
          -Resets:                      99.99%  138.15m
          Bogus:                                0
                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6


So anyway, at first glance it might seem like the L2ARC is actually doing something, at a 31.76% hit ratio, but if you consider that there are only 17.29m requests over the last 30 days (roughly 2.6 million seconds), that works out to only about 6-7 requests per second on average, though they're admittedly likely to be heavily clustered. Still, the ARC itself is sufficient to handle the vast majority of what is going on, and the pool is probably idle more than 90% of the time, so I would have to say that it isn't making a significant or noticeable difference.
 

joeschmuck (Old Man, Moderator):
@jgreco,
What command do I type to get the report you generated? Also, I really don't think the L2ARC will ever generate a real benefit for me, but I will hang on to it until I can do some NFS testing. I wish I had a legal copy of Windows Ultimate, but I'm not paying just to test out NFS. I guess I could install Ultimate on a separate SSD and not activate it, just to try out NFS.

@bollar,
Unfortunately my max RAM is 8GB for this board; I wish I could double it and cross my fingers that it would work, but the risk is not worth it. I could lose my data at any time, and the only heartache would be copying back the critical data from a second backup device. It's not much, only a few DVDs' worth of data; everything else would be missed, but movies are not worth worrying about and I could rip those again if I really felt like it, though I doubt I would. Overall I'd rather not lose my data until the gremlins say it's time to trash it.

Thanks guys for your inputs.
 

jgreco (Resident Grinch):
Oh sorry, yeah my bad.

# /usr/local/www/freenasUI/tools/arc_summary.py | less

Tells you LOTS of useful information but despite looking like it is "friendly", it is still kind of frustrating to read unless you know what it is saying. Some of the stuff is more obvious than others. For example, "ARC Efficiency"/"Cache Hit Ratio" of 99.48% is obvious; what isn't obvious is that the 3.30b after it represents 3.3 billion total hits. The percentages are probably easier reading, of course, but you can glean more information from what's implied by the raw numbers. For example in my previous note I pointed out that the L2ARC wasn't getting heavily hit, based in part on the size of those numbers. One could definitely make good use of those by taking before and after readings of the kstat variables that they're based on. The frustrating thing about ZFS is that it is kind of like reading tea leaves sometimes. Even with all this reporting, when I have real performance issues, I've ended up needing to dig into the source to understand what a lot of it meant and implied.
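To illustrate the before-and-after idea, here's a rough sketch using the raw arcstats counters that arc_summary.py reads (the OID names are the standard FreeBSD kstats; adjust the sleep to cover whatever workload you want to measure):

Code:
#!/bin/sh
# Sample the ARC/L2ARC hit counters, wait for the workload, then report the deltas.
b_hit=$(sysctl -n kstat.zfs.misc.arcstats.hits)
b_miss=$(sysctl -n kstat.zfs.misc.arcstats.misses)
b_l2hit=$(sysctl -n kstat.zfs.misc.arcstats.l2_hits)
b_l2miss=$(sysctl -n kstat.zfs.misc.arcstats.l2_misses)
sleep 60
a_hit=$(sysctl -n kstat.zfs.misc.arcstats.hits)
a_miss=$(sysctl -n kstat.zfs.misc.arcstats.misses)
a_l2hit=$(sysctl -n kstat.zfs.misc.arcstats.l2_hits)
a_l2miss=$(sysctl -n kstat.zfs.misc.arcstats.l2_misses)
echo "ARC hits:     $((a_hit - b_hit))"
echo "ARC misses:   $((a_miss - b_miss))"
echo "L2ARC hits:   $((a_l2hit - b_l2hit))"
echo "L2ARC misses: $((a_l2miss - b_l2miss))"

That tells you far more about what a particular workload is doing than the cumulative-since-boot percentages.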
 

joeschmuck (Old Man, Moderator):
Got it. So my ARC is only 3.6 GB with a 72% hit ratio, and my L2ARC is 98 GB with a 1% hit ratio. I can't say I'm surprised given how my system is used (system backups and streaming video). Now I want to try larger RAM modules in my machine; time to dig up some 4GB modules, if I can find some.

Code:
System Memory:

        2.10%   166.29  MiB Active,     35.37%  2.74    GiB Inact
        56.73%  4.39    GiB Wired,      4.13%   327.63  MiB Cache
        1.65%   130.44  MiB Free,       0.01%   1.12    MiB Gap

        Real Installed:                         8.00    GiB
        Real Available:                 99.80%  7.98    GiB
        Real Managed:                   96.92%  7.74    GiB

        Logical Total:                          8.00    GiB
        Logical Used:                   60.19%  4.82    GiB
        Logical Free:                   39.81%  3.18    GiB

Kernel Memory:                                  3.60    GiB
        Data:                           99.57%  3.59    GiB
        Text:                           0.43%   15.80   MiB

Kernel Memory Map:                              5.14    GiB
        Size:                           63.45%  3.26    GiB
        Free:                           36.55%  1.88    GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (THROTTLED)
        Storage pool Version:                   28
        Filesystem Version:                     5
        Memory Throttle Count:                  7

ARC Misc:
        Deleted:                                32.25m
        Recycle Misses:                         726.90k
        Mutex Misses:                           5.60k
        Evict Skips:                            5.60k

ARC Size:                               53.45%  3.60    GiB
        Target Size: (Adaptive)         53.45%  3.60    GiB
        Min Size (Hard Limit):          12.50%  862.46  MiB
        Max Size (High Water):          8:1     6.74    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       85.57%  3.08    GiB
        Frequently Used Cache Size:     14.43%  532.30  MiB

ARC Hash Breakdown:
        Elements Max:                           996.48k
        Elements Current:               94.99%  946.52k
        Collisions:                             28.10m
        Chain Max:                              31
        Chains:                                 130.13k
                                                                Page:  2
------------------------------------------------------------------------

ARC Efficiency:                                 188.00m
        Cache Hit Ratio:                86.95%  163.47m
        Cache Miss Ratio:               13.05%  24.53m
        Actual Hit Ratio:               72.71%  136.70m

        Data Demand Efficiency:         98.37%  87.57m
        Data Prefetch Efficiency:       57.85%  54.25m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             15.51%  25.35m
          Most Recently Used:           17.72%  28.96m
          Most Frequently Used:         65.90%  107.73m
          Most Recently Used Ghost:     0.20%   319.49k
          Most Frequently Used Ghost:   0.67%   1.10m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  52.69%  86.14m
          Prefetch Data:                19.20%  31.39m
          Demand Metadata:              28.02%  45.80m
          Prefetch Metadata:            0.09%   139.97k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  5.83%   1.43m
          Prefetch Data:                93.21%  22.87m
          Demand Metadata:              0.88%   214.78k
          Prefetch Metadata:            0.08%   20.18k
                                                                Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        49.53m
        Tried Lock Failures:                    193.97k
        IO In Progress:                         44.56k
        Low Memory Aborts:                      81
        Free on Write:                          690.17k
        Writes While Full:                      214.57k
        R/W Clashes:                            36
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           0

L2 ARC Size: (Adaptive)                         93.21   GiB
        Header Size:                    0.17%   164.89  MiB

L2 ARC Evicts:
        Lock Retries:                           179
        Upon Reading:                           489

L2 ARC Breakdown:                               24.53m
        Hit Ratio:                      1.00%   245.96k
        Miss Ratio:                     99.00%  24.29m
        Feeds:                                  1.95m

L2 ARC Buffer:
        Bytes Scanned:                          1.11    PiB
        Buffer Iterations:                      1.95m
        List Iterations:                        121.26m
        NULL List Iterations:                   4.88m

L2 ARC Writes:
        Writes Sent:                    100.00% 264.35k
                                                                Page:  4
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 221.89m
        Hit Ratio:                      84.20%  186.83m
        Miss Ratio:                     15.80%  35.06m

        Colinear:                               35.06m
          Hit Ratio:                    0.31%   107.80k
          Miss Ratio:                   99.69%  34.96m

        Stride:                                 181.19m
          Hit Ratio:                    99.71%  180.66m
          Miss Ratio:                   0.29%   534.07k

DMU Misc:
        Reclaim:                                34.96m
          Successes:                    0.74%   259.24k
          Failures:                     99.26%  34.70m

        Streams:                                6.19m
          +Resets:                      1.12%   69.59k
          -Resets:                      98.88%  6.12m
          Bogus:                                0
                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            8308539392
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.l2c_only_size                   97679313408
        vfs.zfs.mfu_ghost_data_lsize            1418331136
        vfs.zfs.mfu_ghost_metadata_lsize        1178461184
        vfs.zfs.mfu_ghost_size                  2596792320
        vfs.zfs.mfu_data_lsize                  650330624
        vfs.zfs.mfu_metadata_lsize              297502720
        vfs.zfs.mfu_size                        994200064
        vfs.zfs.mru_ghost_data_lsize            945029120
        vfs.zfs.mru_ghost_metadata_lsize        323803136
        vfs.zfs.mru_ghost_size                  1268832256
        vfs.zfs.mru_data_lsize                  2326515712
        vfs.zfs.mru_metadata_lsize              122701312
        vfs.zfs.mru_size                        2502825472
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       16384
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  1808699392
        vfs.zfs.arc_meta_used                   890040640
        vfs.zfs.arc_min                         904349696
        vfs.zfs.arc_max                         7234797568
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.write_limit_override            0
        vfs.zfs.write_limit_inflated            25718415360
        vfs.zfs.write_limit_max                 1071600640
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_shift               3
        vfs.zfs.no_write_throttle               0
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.mg_alloc_failures               8
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.scrub_limit                     10
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.time_shift                 6
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     28
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
                                                                Page:  7
------------------------------------------------------------------------

 

jgreco (Resident Grinch):
Hmm, interesting.

You have a small but nonzero memory throttle count, which from what I can tell is a sign that ZFS was at some point a little stressed for ARC. It isn't a high count, but it does suggest that more memory would not be pointless.

Your cache miss rate is also much greater than what I would expect to see for a "small" server. I'm assuming you're storing lots of video or music files? And then periodically indexing them ("automatic scan fix")? I'm trying to envision what's going on here. If you had more frequently accessed blocks than what the ARC could handle, then I'd expect a lot of that to be crammed out to L2ARC.

So the miss rate doesn't quite make sense with the L2ARC results.

But wait. If the bulk of your activity is automatic scanning of files and multimedia metadata, how quickly does that process run? Because if it is happening quickly, then there isn't a lot of chance for ZFS to populate the L2ARC - that's what my initial post in this thread was talking about.

Also, is the L2ARC cold? How long has it been attached to the pool? If you have a low write_max, and your activity is mainly a frantic flurry at the top of every hour followed by 59 minutes of idle, then it could take days for the L2ARC to warm up.

I'm pretty sure the clues to decode that are stored in the L2ARC summary, but I haven't really put the time into it to wrap my head around what that all means. In particular, I know that "low memory aborts" happen when the system would like to feed out to L2ARC but instead decides that the ARC is under a fair bit of pressure. But 81 doesn't seem like a lot of events, unless your L2ARC is cold, in which case it probably is.
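A quick way to sanity-check whether it is still cold is to look at the raw counters directly, something like this (the names are the stock FreeBSD arcstats kstats; compare l2_size against the size of your cache device and see whether l2_hits moves at all between readings):

Code:
# How much the L2ARC currently holds, and whether anything is being read back from it
sysctl kstat.zfs.misc.arcstats.l2_size
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses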

Reading the source is a hell of a learning process. :smile:
 

joeschmuck (Old Man, Moderator):
I've had the system running for 20 days now. Most of my activity is backing up data and then verifying that data. I do not have many music files, only a few samples, and I never stream those. I only stream a few videos a week, tops. Of those 20 days I was on vacation for 10, so only the automated backups were running. Oh, I did perform benchmark testing, so I'm sure some of my hits came from there. Like I said, I don't expect much from an L2ARC in my situation. If I used it for a lot of data that needed to be accessed frequently, maybe it would be a bit more useful. I think it would be great if I could create a VM that my wife and daughter could use, so I could restore snapshots when they start to complain that the computer is slowing down again. What a pain, but that is another topic altogether.

Yes, the video files get indexed whenever I add or move a video file around, which isn't very often, maybe once every week or so.

I should read up on the L2ARC settings more just to see if there is something I could squeeze out of it. I just read my MB manual and it does say it supports four banks of 4GB RAM, so 16GB is supported. I must have been thinking about the original MB I was using last year.
 

jgreco (Resident Grinch):
Then my impression is that your ARC is reasonably sized for a vast majority of what you do but if you push hard, then it gets stressed, and during those times it might be nice to have more ARC. The L2ARC doesn't appear to be doing anything significant as you noticed.
 

paleoN (Wizard):
joeschmuck said:
    Since my posting I have done some testing on L2ARC and ZIL and found out they didn't contribute much to a home system, at least my system and for streaming media.
    I may need to look at the adjustable values you mentioned above as I didn't adjust a single thing, I took things at stock values.
They wouldn't for your use case with everything at the defaults. The L2ARC was designed for random reads of mostly static data, e.g. databases and such. Untuned, you will see little to no benefit for streaming workloads.

For example, to use it with streaming movies, create the following tunable:

vfs.zfs.l2arc_noprefetch: 0

I would increase the two tunables jgreco already mentioned a bit as well. I imagine your SSD is able to handle more than 8MB per second of writes.
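Roughly, from a shell, that would look something like the following (a sketch only; on FreeNAS you'd add these as Sysctls so they persist across reboots, and the 32MB/64MB figures are placeholders, not a recommendation for any particular SSD):

Code:
# Allow prefetched (i.e. streaming/sequential) reads to be written to L2ARC
sysctl vfs.zfs.l2arc_noprefetch=0

# Raise the feed rate above the 8MB/sec default, e.g. 32MB/sec with a 64MB boost
sysctl vfs.zfs.l2arc_write_max=33554432
sysctl vfs.zfs.l2arc_write_boost=67108864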
 

joeschmuck (Old Man, Moderator):
After pricing RAM last night I am taking a step back. It's really nice to play around with the hardware, but once the playing starts to hit your wallet, it's time to be smart and ask, "Will I actually recoup these costs, and is it really worth it?" Right now I have to say no, it's not worth it for me at this time. When I build a new server I will up the RAM; in fact, it could be when I upgrade my main computer that my old computer (i7 950 w/24GB RAM) becomes my NAS. I'd say that is at least two years down the road, maybe.

I will try changing the tunable values just to see what happens, but ultimately I plan to remove the ZIL and L2ARC unless tuning makes them valuable. I could use the SSD elsewhere.
 

jgreco (Resident Grinch):
That's always the problem, isn't it... it's always nice to have the nicer thing until you have to actually pay for it, ahahaha.

Given your use scenario as described, I'd guess that no amount of tuning will make a noticeable difference. For L2ARC to really pay off, you probably need to have a pool that is substantially busy, and more specifically spindles that are substantially busy. If your spindles are not busy, then it isn't much of a burden and only a minor delay for them to seek on over and serve up your bits. You only gain a milliseconds-quicker response by using L2ARC. However, if your spindles are very busy, serving up a few dozen requests per second, additional load may result in substantially reduced responsiveness of the server, at which point having a much larger {,L2}ARC is very helpful because data is served from cache instead of spindles. In that case the performance gains have the potential to be absolutely massive.
 