What is the fastest feed rate to L2ARC?

Jyo

Dabbler
Joined
May 12, 2021
Messages
14
Hi,

We have a video editing workflow. Would a large L2ARC (10+ TB across 4 NVMe Gen4 SSDs) help? Since video editing involves re-reading the same video files multiple times, is it possible to tune the L2ARC so that everything is cached on the first read, and every read after the first comes from the cache rather than the pool?

The NVMe SSDs have a lot of bandwidth (say 4 of them). What is the highest rate one can set for feeding the L2ARC?

Any experiences?

Also, do writes that go through the ZIL/SLOG get cached to L2ARC as well, so that recently written data is also accelerated by the read cache?
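
On the feed-rate question, these look like the relevant OpenZFS tunables (Linux module parameter names; I'm assuming TrueNAS exposes equivalents as tunables/sysctls, and the values below are only an illustration, not a recommendation):

    # max bytes written to L2ARC per feed interval (default is 8 MiB)
    echo 1073741824 > /sys/module/zfs/parameters/l2arc_write_max
    # extra write budget while the ARC is still warming up (default is also 8 MiB)
    echo 1073741824 > /sys/module/zfs/parameters/l2arc_write_boost
    # 0 = also cache prefetched (sequential/streaming) reads, which is most of what large video reads are
    echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch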
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
ZIL & SLOG are for synchronous writes only, so they are of limited value to most people. I don't know if writes are put into the L2ARC. It does not seem like they would be, though the writes can stay in ARC, (aka RAM), for a while, possibly moving to L2ARC if read, though again I don't know.

In general, you want to maximize your RAM first, before adding an L2ARC. This is because the index to the L2ARC has to reside in RAM, so too little RAM and too big an L2ARC is counterproductive.

If you put the video files in a separate ZFS dataset from any other misc. datasets, you can turn off secondary cache, (aka L2ARC), for every dataset except your video dataset(s). That would more or less dedicate it to your video files.
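
Something like the following, where "tank" and the dataset names are only placeholders for your own layout:

    zfs set secondarycache=all  tank/video     # the working video share keeps using L2ARC
    zfs set secondarycache=none tank/home      # everything else bypasses it
    zfs set secondarycache=none tank/backups

The property is inherited, so setting it on a parent dataset covers its children unless overridden.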

I don't have any speed readings for L2ARC, but for many workloads the limiting factor is the network.
 

Jyo

Dabbler
Joined
May 12, 2021
Messages
14

Thanks Arwen, that makes sense. We can keep the L2ARC dedicated to the working file share.

It makes sense to consider network speed for the L2ARC feed rate. Our clients are connected at 10 GbE to a switch that the server will connect to at 40 GbE. 40 GbE is 5 GB/s theoretically, so I could test with that feed rate and see if it helps.
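
To check whether the cache is actually being fed and hit during such a test, I'd expect something like this to do (tool names as shipped with OpenZFS on Linux; TrueNAS may name them slightly differently, and "tank" is a placeholder):

    zpool iostat -v tank 5      # per-vdev throughput, including the cache devices
    arcstat 5                   # live ARC/L2ARC hit statistics
    arc_summary                 # includes an L2ARC section with hit ratio and header size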

I'm currently spec'ing the server with 16x 16 GB DIMMs; the next step up is 32 GB DIMMs, so 512 GB total. Would that be recommended for an L2ARC in the tens of TBs?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
Sorry, I don't have any recommendations on RAM or L2ARC size.

One note: you can have multiple L2ARC devices. L2ARC does not allow them to be mirrored or RAIDed, so multiple devices are striped. This is fine because L2ARC contents are duplicates of information already in the pool, so on loss of an L2ARC device there is no data loss, just a bit of performance.
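
For example, adding several cache devices at once spreads the cache across all of them (pool and device names are placeholders):

    zpool add tank cache nvme0n1 nvme1n1 nvme2n1 nvme3n1
    zpool remove tank nvme3n1    # cache devices can also be removed again at any time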
 

Jyo

Dabbler
Joined
May 12, 2021
Messages
14
Thanks Arwen. Yes, I'm actually thinking that if we can have 4-8 fast SSDs then maybe we can increase the L2ARC feed rate really aggressively, but I don't have the system on hand to try it yet.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
If you're reading things repeatedly, those things shouldn't be evicted from ARC, so won't get to L2ARC.

L2ARC saves you from hitting the pool for files that would otherwise have been evicted from ARC and would have had nowhere else to be read from. After a file (or bunch of blocks) is accessed from L2ARC, it is back in ARC.

It costs you writes to your L2ARC SSDs for every new read/write in that pool (after ARC is filled), so ensure you have high write endurance drives or you'll burn them out quickly.
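
To keep an eye on that wear, something like this should work (device path is a placeholder):

    smartctl -a /dev/nvme0 | grep -i 'percentage used'   # NVMe wear indicator; 100% means rated endurance consumed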

You could mirror L2ARC (ZFS allows for it), but that's not logical (as @Arwen said) as the data is already safe in the pool and isn't impacted if L2ARC is lost.

When sizing L2ARC, consider that the metadata required to manage the blocks in L2ARC is stored in ARC, meaning your ARC will be less useful the more L2ARC you add.
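
As a rough back-of-the-envelope (the per-record header size differs between OpenZFS versions; something on the order of 70-100 bytes per cached record is the commonly quoted range, so treat this purely as an estimate):

    10 TiB of L2ARC / 1 MiB recordsize    =  ~10.5 million cached records
    ~10.5 million records x ~96 bytes     =  ~1 GiB of ARC used for L2ARC headers
    the same 10 TiB at 128 KiB recordsize =  ~8x that, roughly 8 GiB

So with large records (typical for video) the overhead is modest, but with small records it eats noticeably into ARC.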

It seems to me that what you really want is storage tiering... (recently accessed data living on fast pool member disks, less recently accessed content evicted to slower pool member disks).

There has been some kind of confirmation from @Kris Moore that it's in the plans to be looked at in the future, but it's currently not available.

It should be clearly noted that L2ARC can be helpful, but it is most certainly not storage tiering.

I'm not sure if it's yet considered safe to do (some bugs were found and it was disabled by default), but you may also want to set your L2ARC to be persistent (not emptied on reboot), which may help in some cases.
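
If you do try it, it's controlled by a tunable; on Linux OpenZFS it looks like this (check your platform for the equivalent sysctl name):

    echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled   # rebuild L2ARC contents after a reboot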

For all of this, RAM/ARC is your best friend. Find a hardware option that allows for the largest amount of that and deploy it.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
This article might also help:
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
@sretalla - Both 0.8.6 & 2.0.4 versions of OpenZFS do not support redundant cache devices;
cache    A device used to cache storage pool data. A cache device cannot be configured as a mirror or raidz group. For more information, see the Cache Devices section.
Similar details in the description of "Cache Devices";
Cache devices cannot be mirrored or part of a raidz configuration. If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or raidz configuration.
Though I don't know if later versions of OpenZFS support redundant cache devices, (aka L2ARC).
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
@sretalla - Both 0.8.6 & 2.0.4 versions of OpenZFS do not support redundant cache devices;
OK, you got me... I'm apparently mixing a couple of things from a long time ago when I was bothering with L2ARC and SLOG and could swear that I had done it, but it may have been mirrored SLOG, so I suspect that's where the confusion arose.

Mea culpa.
 