ZFS doesn't know or care where requests for data come from. Many of the NAS protocols (CIFS, AFP, etc.) are implemented as userland programs, so from ZFS's perspective they look virtually identical to MiniDLNA, which is itself just another network server, so ... what's the difference again?
First, understand that L2ARC is filled with data that is likely to be evicted from ARC soon; there is no value in moving fresher stuff to slower storage, since if it's fresher, it's more likely to be accessed again soon. This means you need a reasonably-sized ARC, because the only way for ZFS to make good choices about what to demote is to have a fair amount of stuff cached in the first place. If you have very bursty traffic that causes massive rounds of ARC evictions, you're going to see poorer selections land in L2ARC, because what gets flushed out is basically a few gallons of water scooped from a firehose flow of data. L2ARC is not a substitute for a decently sized ARC.
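If you want to sanity-check this on a running box, the ARC statistics are exported as sysctls on FreeBSD/FreeNAS (names can vary a bit by version, so treat these as illustrative):

sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.l2_size
sysctl kstat.zfs.misc.arcstats.l2_hits
sysctl kstat.zfs.misc.arcstats.l2_misses

If arcstats.size is small, or bouncing around wildly, that's the problem to fix before you go fiddling with L2ARC.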
L2ARC population is governed by several controls. The ones you can reasonably affect are:
vfs.zfs.l2arc_write_boost: 134217728
vfs.zfs.l2arc_write_max: 67108864
Both of these default to, I believe, 8MB per feed period, and the feed period is 1 second on FreeNAS. write_max controls how much data per second can be flushed out to your L2ARC device. write_boost controls how much is flushed out while ARC is still filling; during that window essentially nothing is reading from L2ARC, so you can go a bit heavier on writes. The big thing to remember with these tunables, though, is that you can't just say "oh, my SSD can handle 200MB/sec, so I'll set them to 200MB/sec!" because then your SSD won't be able to service read requests in a timely fashion. You'll see that I've picked 64MB/sec for an OCZ Agility 3 60GB; that's about 1/8th of its potential write speed.
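For the record, here's what those values look like as loader tunables (this is /boot/loader.conf syntax on stock FreeBSD; on FreeNAS you'd enter the same variable/value pairs as tunables in the GUI):

# 64MB/sec steady-state feed rate: 64 * 1024 * 1024 bytes
vfs.zfs.l2arc_write_max="67108864"
# extra headroom while ARC is still warming up: 128 * 1024 * 1024 bytes
vfs.zfs.l2arc_write_boost="134217728"

The values are plain byte counts, so do the math for your own SSD rather than copying mine.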
So anyway, what basically ends up happening is that ZFS picks the colder regions of its ARC and flushes them out to L2ARC at no more than l2arc_write_max. You don't want to get too aggressive, and you should understand that L2ARC is not designed to instantly cache every bit of data it might be nice to have cached. The idea is that after things have been running a while, frequently requested stuff ends up in ARC, less common stuff in L2ARC, and everything else is pulled from disk.
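To make the rate limiting concrete (a simplification of the feed logic, not the literal code): each feed period, the feed thread walks from the cold end of the ARC lists and writes out up to l2arc_write_max bytes; while ARC is still filling, l2arc_write_boost gets added on top of that, at least in the implementations I've read. With my values above:

warm ARC:    64MB per feed period
warming ARC: 64MB + 128MB = 192MB per feed period

with the feed period normally being 1 second.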
One minor correction: if you have l2arc_feed_again set to 1, the L2ARC flush can exceed the rate I described above. l2arc_feed_secs sets the normal feed interval and defaults to 1 second, but there is also l2arc_feed_min_ms, which defaults to 200 and sets the floor, so it is possible to have several "feed_again" passes happen quickly back-to-back. Do not set write_max aggressively high unless you understand the dynamics here. The code is reasonably clever and will self-manage this, assuming you give it reasonable guidance. For the workloads here, I determined that 1/8th of theoretical write capacity, even accelerated through the feed_again process, would still not starve read attempts; 1/8th of theoretical write capacity is probably as aggressive as one should ever get.
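To put rough numbers on the worst case (assuming the feed thread actually hits the 200ms floor every time):

feeds per second: 1000ms / 200ms = 5
worst-case flush: 5 * 64MB = 320MB/sec

Against the roughly 512MB/sec of theoretical write bandwidth the 1/8th figure implies for this SSD, that burst still leaves bandwidth to service reads, which is exactly the headroom the 1/8th guideline is buying you.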