At what point does L2ARC make sense?

Status
Not open for further replies.

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
The default advice here is "max out RAM, then add an L2ARC if needed." When I built my recent test machine that's exactly what I did, so no L2ARC was added even though I've got 72 gigs of ECC RAM.

But now I'm looking around and seeing machines with 64G of RAM and significant L2ARCs, and I'm wondering if I made a mistake. My server will support 288 GB of RAM, which I will never install due to cost (plus that warning not to use > 128 gigs of RAM with ZFS until a bug is fixed), so the question becomes this one:

At what point is L2ARC a rational choice? Are there some rules of thumb like "with 64G of RAM you can support up to 120G of L2ARC without problem," or something?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not aware of any bug that is involved with 128GB of RAM. If you have a link I'd love to read about it. I'm wondering if it's no longer applicable or something. I'm pretty sure we've had 2 or 3 people here with 256GB of RAM...

"rational choice" is open for debate. In my opinion, it goes like this (justifications included).

  • Your ARC is your primary pool performance booster. That is entirely in RAM. So the more the merrier, obviously.
  • L2ARC is an extension of your ARC, but there's a cost (after all, nothing is free in this world, right?). The L2ARC index must be stored in the ARC, at roughly 380 bytes per entry. So take the quick thumbrule that your L2ARC shouldn't exceed 5x your ARC as the most liberal. Generally most thumbrules are 3x to 8x, so we'll stay somewhat conservative with 5x, as you don't want to make this a bad choice, right? In theory, if you have 32GB of RAM you'll probably have something like a 25-28GB ARC best case. You don't want to fill your ARC with index entries (and you can't, but anyway...), so you don't want a 60GB L2ARC, because that will take up about 12GB of your ARC (which is 1/2 of your total ARC!). So you want enough RAM to keep the ARC a reasonable size minus the index entries, but not starved for RAM either. After all, an ARC that has a horrible hit rate isn't going to do you any good.
  • Some morons (oops... did I say that?) think that if you are maxed out at 8GB of RAM the solution is to use the L2ARC to "extend" your ARC. It doesn't work that way, and it won't work that way. And we've had a few people that had pool issues just trying to get rid of the L2ARC after they've stupidly^C^C^C^C^C^C accidentally added it. Just don't do this. It's stupid, it can be dangerous for your pool for some reason (exact cause unknown, and I don't care, because if you aren't asking yourself questions before throwing more hardware at a problem, you deserve what's coming down the pipe... gotta hold people responsible for their actions someday), and it won't make anything faster (in fact, it often makes it slower, because you starve your ARC even more).
  • Some morons (oops... I said that again) think that if you buy something like a 512GB SSD and use it as an L2ARC with 64GB of RAM, it'll "do the right thing". There are tunables, hard-coded limits, and other things that prevent you from using 100% of your ARC for the L2ARC index (thank God). But you also need to "do the right thing" by knowing how all this stuff works.
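The index-overhead math in the bullets above can be sketched as a quick back-of-the-envelope calculation. The ~380-bytes-per-entry figure comes from the post itself; the average block sizes are my own illustrative assumptions, since the real overhead depends heavily on your pool's average block size.

```python
# Back-of-the-envelope L2ARC index overhead: the ARC holds roughly
# 380 bytes per L2ARC entry (one entry per cached block).
# The block sizes below are assumptions for illustration only.

BYTES_PER_ENTRY = 380
GIB = 1024 ** 3

def l2arc_index_ram(l2arc_bytes, avg_block_bytes):
    """RAM consumed in the ARC by the index for an L2ARC of this size."""
    entries = l2arc_bytes // avg_block_bytes
    return entries * BYTES_PER_ENTRY

for block in (4096, 16384, 131072):  # 4K, 16K, 128K average blocks
    overhead = l2arc_index_ram(60 * GIB, block)
    print(f"60 GiB L2ARC, {block // 1024}K blocks: "
          f"{overhead / GIB:.2f} GiB of ARC eaten by the index")
```

The spread is the point: with small blocks the index cost is substantial, which is why the 3x-8x thumbrules exist; with large sequential blocks it is far smaller.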
So as a general rule for helping people get started, I consider 64GB of RAM to be a good place "to start" using an L2ARC. I'd feel fairly comfortable that you could probably get by with 120GB of L2ARC with 64GB of RAM. If you bump your system up to 128GB of RAM, you can increase your L2ARC by quite a bit, perhaps as high as 350-400GB. Once you get past that 32GB of ARC, you enter the territory where you can devote up to 100% of added RAM to an L2ARC by using a bigger SSD. And since you get a 5:1 return per GB of RAM added, the L2ARC can cause a massive explosion in pool I/O output.

The key is to validate that you are making good choices for your workload. Stuff doesn't end up in the L2ARC until it's been requested 5x (I think). So if you stream movies at home, you are wasting your money on an L2ARC unless you plan to watch the same movie 10x in a row or something. And even then, unless your pool is so incredibly I/O intensive that it couldn't have handled the workload on its own, you wasted money on that SSD. Remember, the whole point of the L2ARC is to take enough of the I/O load off the pool itself to allow it to do the important things, like writing new data to the pool.

The key is to determine what size L2ARC will provide you with the biggest gain in I/O throughput from the L2ARC and not the pool, then try to figure out how much RAM you need to make that happen.

Here's a cheatsheet of L2ARC sizes I'd recommend for various RAM sizes. You can obviously use more RAM than the L2ARC size calls for if you want the ARC to be able to provide a larger performance boost. These may or may not work for your load, as the ARC has a major impact on pool performance, so you may be better off with something slightly smaller than what I mention here.

64GB of RAM = 120GB L2ARC max
96GB of RAM = 240GB L2ARC max
128GB of RAM = 350GB L2ARC max
196GB of RAM = 750GB L2ARC max
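The cheatsheet above can be turned into a small lookup helper. The listed points are cyberjock's numbers; the linear interpolation between them, and the cutoffs at either end, are my own assumptions, not part of the original advice.

```python
# cyberjock's RAM-to-max-L2ARC cheatsheet as (RAM GB, max L2ARC GB) pairs.
# Interpolation between the listed points is an assumption for illustration.
CHEATSHEET = [(64, 120), (96, 240), (128, 350), (196, 750)]

def max_l2arc_gb(ram_gb):
    """Rough max L2ARC size for a given amount of RAM, per the cheatsheet."""
    if ram_gb < CHEATSHEET[0][0]:
        return 0  # below 64GB of RAM, the advice here is: no L2ARC yet
    for (r0, l0), (r1, l1) in zip(CHEATSHEET, CHEATSHEET[1:]):
        if ram_gb <= r1:
            # linear interpolation between adjacent cheatsheet rows
            return l0 + (l1 - l0) * (ram_gb - r0) / (r1 - r0)
    return CHEATSHEET[-1][1]  # beyond the table, cap at the last entry

print(max_l2arc_gb(64))    # 120
print(max_l2arc_gb(128))   # 350
```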

Keep one thing in mind: as your L2ARC is utilized, your CPU load will go up as the CPU deals with the indexing. Usually this doesn't pose a problem, but don't buy the lowest-end Xeon and 256GB of RAM and hope for amazing pool performance.
 

ZFS Noob

Here's what I saw that made me think > 128G was a bad thing:
(9/12/2013) There are presently issues related to memory handling and the ARC that have me strongly suggesting you physically limit RAM in any ZFS-based SAN to 128 GB. Go to > 128 GB at your own peril (it might work fine for you, or might cause you some serious headaches). Once resolved, I will remove this note.
From here, scroll down to section 15.
 

ZFS Noob

Thanks for all that. It looks like I could probably add a 128G L2ARC without any real problem, assuming the usage of my server (which is currently being tested and isn't the primary storage for my cluster) indicates it could use more cache.

That's really good to know. :)
 

cyberjock

Ok, if I'm not mistaken, his blog is for Nexenta stuff. One of the biggest problems newbies with ZFS have is figuring out what does and doesn't apply to FreeNAS/FreeBSD. Some (but not all) Solaris stuff applies. Some (but not all) Nexenta stuff applies. Very little (but some) applies from the ZFS on Linux project. It's just a big mish-mash, and there's no decoder ring to figure out what does and doesn't apply for FreeBSD/FreeNAS.

That being said, I don't think it's a good idea to assume that FreeBSD has the same limitation at this time. I'd like to think I'd have read about it, since I do so much Googling, but I'm not involved enough with the ZFS devs to expect to have actually heard about it if it's a real concern. I know that someone around here (won't give names, to protect the innocent) has plans for a FreeNAS server with 1TB of RAM that they plan to build "just to see what happens". So I tend to think that the 128GB problem we are discussing may not apply.
 

cyberjock

Thanks for backing me up. I do appreciate validation(and corrections) when someone has them.
 