The question is whether the working set of your team will be much larger than the 256 GB you already have as primary cache (ARC). Also, the data needs to be copied into and out of the L2ARC. For everything that is already in RAM, the L2ARC is of absolutely no use. Once RAM is full, the L2ARC starts to fill from read operations, so the only way to warm up the L2ARC is by reading data.
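If you want to check that empirically before buying hardware, the existing ARC counters already tell you how often reads miss RAM today. A minimal sketch, assuming a Linux box running OpenZFS and a pool named tank (both hypothetical):

```sh
# Overall ARC efficiency: a high demand hit ratio means the working
# set already fits in the 256 GB ARC and an L2ARC would add little.
arc_summary

# Live view: reads, misses and hit percentage, sampled every 5 seconds.
arcstat 5

# Once a cache device is attached, its fill level and read traffic
# show up under the "cache" section of this output.
zpool iostat -v tank 5
```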
There is a persistence option now. It is off by default, and I don't know whether it is considered production-ready yet; it is quite new, just like special vdevs.
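For reference, on Linux the rebuild behaviour is controlled by a module parameter; a quick check, assuming OpenZFS 2.0 or later (the exact name and default may differ between versions):

```sh
# 1 = rebuild the L2ARC from the cache device on pool import,
# 0 = start with a cold L2ARC after every reboot.
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled

# Enable at runtime (persist it via /etc/modprobe.d/zfs.conf).
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
```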
Whether L2ARC pays off really depends on whether the same blocks will be read multiple times.
I do understand what you're saying, Patrick. The editors tend to work on large files (which stay on the Working File Server) and open them in the video editing application. Each project can vary quite widely depending on scale, scope, and the codecs involved, but for the sake of estimating, figure a range of 50 GB to 200 GB+ per project, with multiple projects worked on simultaneously (within a given week) by 2-3 editors.
I agree with the special vdev recommendation; it totally makes sense for the archived AV media dataset.
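For anyone following along, attaching a special vdev looks roughly like this. Pool name, device paths, and the dataset are placeholders, and the small-blocks cutoff is only an example value. One caveat: the special vdev holds pool metadata, so losing it loses the pool; it must be mirrored.

```sh
# Add a mirrored special vdev for metadata (mirroring is essential:
# if the special vdev fails, the whole pool is gone).
zpool add tank special mirror \
    /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B

# Optionally route small data blocks (thumbnails, sidecar files, etc.)
# to the special vdev as well; blocks <= 64K land on SSD, the large
# media blocks stay on the spinning disks.
zfs set special_small_blocks=64K tank/archive
```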
What I'm exploring is creating a large enough SSD-based cache to accelerate whatever they work on, without having to build a separate all-SSD zpool. That would let me keep a much larger dataset on disk (70-100 TB) fronted by a 10-14 TB L2ARC, rather than spending the budget on a 20-30 TB SSD-based pool.
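If it helps, the cache itself is cheap to attach and safe to lose, so this can be tested incrementally. A sketch, again with hypothetical pool and device names; the RAM figure is a rough rule of thumb, not a guarantee:

```sh
# Cache devices need no redundancy: if one dies, reads simply fall
# back to the pool. They can be added or removed at any time.
zpool add tank cache \
    /dev/disk/by-id/nvme-SSD_C /dev/disk/by-id/nvme-SSD_D

# Each L2ARC record costs ARC memory for its header (on the order of
# ~100 bytes per record in recent OpenZFS). With recordsize=1M on the
# media dataset, a 14 TB L2ARC is roughly 14M records, i.e. only a few
# GB of the 256 GB ARC; with small records the overhead explodes.
zfs set recordsize=1M tank/working
```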
Would things like pre-fetching help?
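One thing worth knowing here: by default the L2ARC deliberately skips buffers that arrived via prefetch, on the assumption that sequential streaming reads are served well enough by the spinning disks. For a mostly sequential video workload, a couple of Linux module parameters are worth experimenting with (names from OpenZFS; defaults may vary by version):

```sh
# Default 1: prefetched (sequential, streaming) buffers are NOT
# written to the L2ARC. Setting 0 lets large sequential video reads
# populate the cache too.
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

# Optionally raise the L2ARC fill rate so the cache warms up faster
# than the conservative default (bytes written per feed interval).
echo $((256*1024*1024)) > /sys/module/zfs/parameters/l2arc_write_max
```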