2x 256GB SSD L2ARC with 32GB/64GB RAM


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Wow. That doesn't help much with the need for lots of ARC in the first place to correctly identify interesting blocks (still thinking about this in terms of VM datastore), but it does potentially change the game for the amount of L2ARC you could attach to such a system.

The thing is that memory prices haven't fallen as rapidly as SSD prices, so the possibility of maybe tripling(?) the L2ARC:ARC ratio is very attractive. The VM filer here has 64GB RAM and a 256GB L2ARC, and that's pretty fine. If I can triple the L2ARC, that's pretty sweet, but there's also an argument that the RAM could be boosted to 128GB, more of the ARC could be set aside for metadata, and I could punch the L2ARC up past a terabyte, which would cover our working set very thoroughly for a relatively modest cost (compared to going to 256GB RAM and 1TB+ of L2ARC). The current cost for 128GB of RAM is still around $1200...
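
To put some rough numbers behind that ratio question, here's a quick back-of-the-envelope sketch (Python, purely illustrative). The per-record header sizes and the 16K average block size below are assumptions, not figures from any particular ZFS build; the point is just that the RAM cost of indexing an L2ARC scales with how many blocks it holds, so shrinking the header changes how much L2ARC a given amount of RAM can sanely carry.

```python
# Rough estimate of ARC RAM consumed by L2ARC headers for a full cache device.
# Header sizes below are ASSUMED placeholders (an "old" vs. "new" figure),
# not values taken from any specific ZFS release.

GiB = 1024 ** 3

def l2arc_header_ram(l2arc_bytes, avg_block_bytes, header_bytes):
    """RAM (bytes) spent in ARC on headers for a fully populated L2ARC."""
    return (l2arc_bytes / avg_block_bytes) * header_bytes

for l2arc_gib in (256, 768, 1024):
    for hdr in (180, 70):                       # assumed old/new header sizes
        ram = l2arc_header_ram(l2arc_gib * GiB, 16 * 1024, hdr)
        print(f"{l2arc_gib:4d} GiB L2ARC @ 16K blocks, {hdr:3d} B header: "
              f"~{ram / GiB:4.1f} GiB of ARC spent on headers")
```

At VM-ish 16K block sizes, the header cost is exactly why you need a lot of ARC in the first place before a big L2ARC makes sense.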
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I just want to add something. While I'm choosing not to provide my source (I don't know if he would be upset or not, but I have no doubt that if you google his name and ZFS you'll find plenty showing he's a respected authority on ZFS), the typical "sweet spot" for ZFS running VMs is 96GB to 128GB of RAM and two to four 128GB L2ARC devices (multiple devices mainly for the extra throughput). The gains fall off noticeably once you move above or below that range. I've tended to see the same kind of behavior: systems in that zone tend to be powerhouses, while those above and below are less so.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I have no idea how to come up with such a generic statement. Performance of ZFS in a VM environment is intimately tied to the size of the working set, and I can absolutely trash the ARC on a system with 128GB without that much effort. I do actually have like 75 seats of Windows 10 sitting around that I've done some ... fun ... stuff with.

In our environment, I've experimented with configurations ranging from 64GB to 128GB. Our environment is very favorable to ZFS; the VMs do not do trite writes (including but not limited to atime writes) and the workload tends to be read-heavy. 64GB is very usable, but the working set is probably north of a terabyte, so with current sizing a 256GB L2ARC only gives ~350GB of ARC+L2ARC. I've been holding off on permanently bumping up to 128GB, because without adding more L2ARC it wouldn't make a huge difference.

The problem tends to be that bursts of activity (development work, etc.) processed by the filer displace a lot of the normal resident working set. That isn't a big deal normally, since the pool isn't terribly busy, but that same work also tends to stress the pool, so you hit the double penalty of losing cached working set AND suddenly having a pool whose responsiveness has degraded.

At $1200 per 128GB of RAM, I'm not really that interested in bumping the box up to 256GB of RAM just to gain enough ARC to justify 1TB+ of L2ARC. Bumping up to 128GB RAM and adding another pair of 256GB SSDs is probably justifiable, but the possibility of a better L2ARC:ARC ratio is very attractive, because suddenly it might be feasible to hit the 1TB L2ARC mark with 128GB of RAM.
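
For what it's worth, here's the same kind of napkin math for those options (Python again, purely illustrative). The working-set size and the fraction of RAM the ARC actually gets to use are guesses; plug in your own.

```python
# Napkin math: how much of the working set each configuration could keep warm.
# WORKING_SET and ARC_FRACTION are ASSUMED values for illustration only.

GiB = 1024 ** 3
TiB = 1024 ** 4

WORKING_SET = 1.0 * TiB        # "probably north of a terabyte"
ARC_FRACTION = 0.85            # assumed share of RAM actually usable as ARC

options = [
    ("64GB RAM + 256GB L2ARC",  64 * GiB,  256 * GiB),
    ("128GB RAM + 512GB L2ARC", 128 * GiB, 512 * GiB),
    ("128GB RAM + 1TB L2ARC",   128 * GiB, 1 * TiB),
]

for name, ram, l2arc in options:
    cacheable = ram * ARC_FRACTION + l2arc
    print(f"{name:26s} ~{cacheable / GiB:5.0f} GiB cacheable, "
          f"~{cacheable / WORKING_SET:.0%} of the working set")
```

None of the numbers are gospel, but they show why the 128GB + 1TB L2ARC combination is the first one that actually covers the working set.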
 

LubomirZ

Dabbler
Joined
Sep 9, 2015
Messages
12
And this is exactly what I didn't know and why I was asking: how much L2ARC I can "afford" with 64GB RAM.
What a pity it's not 512GB :) [yet]
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
So, this isn't a typical workload, but I've run a machine with 96 GB of RAM and 1 TB of L2ARC without any trouble at all. Hit ratios were great, but it wasn't a virtual machine server (just NFS @ bonded 10G). Pool size was just over 500 TB (no, that's not a typo).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Depending on what your actual use model is, yes, a larger L2ARC:ARC ratio is perfectly possible - basically, when you've got large (64K, 128K) block sizes, that is often the case.
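
A quick illustration of why block size matters so much (Python, and the per-record header size is an assumed placeholder): the ARC overhead scales with the number of L2ARC records, not their size, so big blocks make even a huge L2ARC cheap to index.

```python
# Header overhead in ARC scales with the NUMBER of L2ARC records, not their size.
# HEADER_BYTES is an assumed placeholder; only the trend matters here.

GiB = 1024 ** 3
HEADER_BYTES = 100              # assumed ARC bytes per L2ARC record
L2ARC_BYTES = 1024 * GiB        # a 1 TiB cache device

for blocksize in (16 * 1024, 64 * 1024, 128 * 1024, 1024 * 1024):
    overhead = (L2ARC_BYTES / blocksize) * HEADER_BYTES
    print(f"recordsize {blocksize // 1024:5d}K -> "
          f"~{overhead / GiB:6.3f} GiB of ARC for headers on a 1 TiB L2ARC")
```

Which lines up with the 96GB RAM / 1TB L2ARC box described above being perfectly comfortable at large block sizes.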
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
Oh, I run 1 MB block sizes on pretty much everything other than special case machines.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I had even forgotten about large_blocks because our primary use model is VM storage. :smile:
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
That's something I've never done with ZFS - typically that's the job of an expensive filer from Netapp. :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Missing the "cr" in the middle there. ZFS is more attractive because the solution is so much cheaper.
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
Oh, I agree. I just haven't been asked to build a cheap VM machine yet. :)
 