What I don't get: On the one hand you say more RAM is better, but on the other hand that NVMe is useless on 1G networks.
This is simple math. Modern SATA or SAS3 SSDs are capable of 6Gbps, which is larger (by quite a bit) than 1Gbps. Even if we worry that an older SATA SSD might not reliably hit six Gbps but only three or four, that is still more than enough to fill a 1Gbps link.
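To put rough numbers on that (a back-of-envelope conversion only; real-world throughput is lower due to protocol overhead and encoding):

```python
# Convert nominal link speeds from Gbps to MB/s by dividing by 8.
# These are theoretical ceilings, not achievable throughput.
def gbps_to_mbs(gbps: float) -> float:
    return gbps * 1000 / 8  # 1 Gbps ~= 125 MB/s

print(gbps_to_mbs(1))  # 1GbE link:  125.0 MB/s
print(gbps_to_mbs(6))  # SATA3 SSD:  750.0 MB/s
```

Even a degraded SSD doing 3Gbps (~375 MB/s) triples the 1GbE ceiling.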
When you are limited to at best 125 MB/s, how does a big ARC help?
The problem you're dealing with is ZFS's ability to gather MRU/MFU statistics so that it can make an intelligent eviction decision.
Let's pretend you have an ARC large enough for four ZFS blocks. This is for illustration only, obviously. We're going to look at MFU here, which is Most Frequently Used.
Client reads a file "/pool/file1". ZFS opens "/pool/" (a directory) and reads the contents. Since the directory is less than 1MB, it fits in one ARC entry. The ARC MFU for this block is 1. ZFS then opens "file1" and reads that. The file fits in 1MB, so this is the second ARC entry. MFU count is also 1.
Now let's say that there's a file "/pool/file2", client reads it. MFU count for the directory goes to 2, MFU for file2 is 1.
Now repeat for "/pool/file3". MFU count for the directory goes to 3, MFU for file3 is 1.
So we have ARC with MFU stats of 3, 1, 1, 1.
Now suppose one of these files is popular: "file2" gets read another ten times. So now we have ARC with MFU stats of 3, 1, 11, 1.
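The toy example above can be sketched as a simple counter (a deliberate simplification; the real ARC tracks MRU/MFU state transitions, ghost lists, and variable block sizes):

```python
from collections import Counter

# Toy model of the four-block ARC example: each file read touches the
# directory's metadata block plus the file's own data block.
mfu = Counter()

def read(path):
    mfu["/pool/ (dir)"] += 1  # directory metadata block
    mfu[path] += 1            # file data block

read("/pool/file1")
read("/pool/file2")
read("/pool/file3")
for _ in range(10):           # file2 is popular, read ten more times
    mfu["/pool/file2"] += 1

print(dict(mfu))  # dir=3, file1=1, file2=11, file3=1
```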
There is very little point in evicting a block with only 1 access out to L2ARC; a single access is not a predictor that the block is popular, just that it was accessed. 2 might be meaningful. But we have no 2's. We do have a 3, but it is metadata and is therefore maybe not a good choice for eviction. And we desperately want to keep the file2 block in ARC, because it has 11 accesses.
When picking candidates for L2ARC eviction, you want the blocks with MFU access counts greater than 1 but still relatively low. Those blocks can be well served by L2ARC, while the very frequently accessed blocks are better served directly out of ARC. This should make sense.
The problem you typically run into is that the access patterns on a hobbyist NAS tend not to favor repeated access to the same files over and over, which means lots of MFU counts remain at 1. That makes it hard for the NAS to differentiate which blocks would be most useful to evict to L2ARC. You need to hold blocks in ARC longer until, hopefully, some of them show up as MFU 2 or 3 or whatever, so that you can evict those blocks to L2ARC knowing they do get used periodically. Otherwise what ends up happening is you evict effectively random crap out to L2ARC, which just causes thrashing and burns through your SSD endurance.
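The selection idea described above could be sketched like this (purely illustrative; the real L2ARC feed thread scans the tails of the ARC lists rather than filtering on raw counts, and the "hot" threshold here is invented for the example):

```python
# Hypothetical heuristic: blocks seen more than once, but not hot
# enough to pin in ARC, make the best L2ARC candidates.
def l2arc_candidates(mfu_counts: dict, hot_threshold: int = 10):
    return [blk for blk, n in mfu_counts.items() if 1 < n < hot_threshold]

stats = {"dir": 3, "file1": 1, "file2": 11, "file3": 1}
print(l2arc_candidates(stats))  # ['dir'] -- the lone count between 2 and 9
```

With mostly-1 counts, the candidate list comes back empty, which is exactly the "nothing worth feeding to L2ARC" situation a small ARC produces.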
Therefore the larger ARC is often helpful because it allows the ARC to make better eviction choices. Does that always happen? No. Just like everything with ZFS, you really need to look at your workloads and your stats. But we are often confronted here in the forums with guiding users towards sane choices, and in general, sight unseen and after a decade of helping users and asking them for their stats, 64GB seems like a much more reasonable basement value for ARC than 32GB if you are going to use L2ARC.
If the numbers by @NickF are correct, and 128GB of L2ARC needs about 1GB of RAM, why not 512GB of L2ARC, needing 5GB of 32GB?
I don't know what numbers you're referring to. If it makes you more comfortable, let me repeat again that this is general guidance for general hobbyist-oriented use cases. If you really want to dial in on what you need, you MUST run the stats and do the math with an understanding of how the size of the ARC interacts with your workload. Your ARC basically "cannot" be "too big"; that is always a good situation to be in. But your ARC can definitely be "too small", for the reasons outlined above. There is no fixed ratio between these things, because ZFS uses a variable block size, and your configuration and workload are integral to the mechanics of the mechanism. I can tell you that 1TB of L2ARC with a ~70 byte overhead per L2ARC ptr works out to about a million records, or roughly 70,000,000 bytes (~70MB) of ARC, if the blocksize is 1MB; with an 8KB blocksize it's going to be a lot more. This is what we mean by "workload dependent".
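The arithmetic behind that estimate, using the ~70-byte-per-record figure from above (a rough figure; the actual header size varies by OpenZFS version):

```python
# ARC overhead for L2ARC headers: one ~70-byte header per cached block,
# so overhead = (L2ARC capacity / blocksize) * header size.
def l2arc_header_overhead(l2arc_bytes: int, blocksize: int, hdr: int = 70) -> int:
    return (l2arc_bytes // blocksize) * hdr

TB = 10**12
print(l2arc_header_overhead(TB, 1024**2))   # 1MB records: ~67 MB of ARC
print(l2arc_header_overhead(TB, 8 * 1024))  # 8KB records: ~8.5 GB of ARC
```

Same 1TB of L2ARC, a 128x difference in ARC overhead, purely from the record size of what gets cached. That is why there is no universal ARC:L2ARC ratio.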
The TrueNAS guide recommends no less than 32GB of RAM and an L2ARC no larger than 5x-10x the size of ARC. These are sensible "will probably work fine" numbers. We typically recommend no less than 64GB here in the forums because it gives ZFS more intelligent eviction choices.