Sure. Consider the design of the system, and then populate slots accordingly. This will probably mean getting out your mainboard manual and looking at the block diagram of how the system is laid out.
For example, while you might think "oh, my SLOG really needs the fastest and mostest PCIe" (heh), the reality is that ZFS probably cannot push synchronous writes to your SLOG at anywhere near the maximum speed the device supports. It might actually be your lowest-throughput device.
Consider the data flows within the system. Pick the biggest one, which is *probably* the network card in your example, and give it the best slot. For your most intense workloads, you want lanes that run directly to the CPU. Then work downwards from there.
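As a rough sketch of that "biggest flow gets the best slot" idea, here's a greedy assignment in Python. All the device names, demand figures, and slot bandwidths are made-up illustrative assumptions, not measurements from any real box:

```python
# Hypothetical sketch: rank data flows by expected sustained throughput
# and hand out the best slots first. Numbers are illustrative assumptions.

# Rough sustained demand per device, GB/s (assumed)
devices = {
    "100GbE NIC": 12.0,     # ~12.5 GB/s at line rate
    "HBA (12 disks)": 3.0,
    "L2ARC NVMe": 3.5,
    "SLOG NVMe": 1.0,       # the sync-write stream is usually modest
}

# Slots listed best-first: CPU-attached lanes before PCH-attached ones
slots = [
    ("CPU x16", 31.5),      # ~PCIe 4.0 x16
    ("CPU x8", 15.8),
    ("PCH x4", 3.9),        # shares the PCH's single uplink to the CPU
    ("PCH x4 (2)", 3.9),
]

# Greedy: biggest flow gets the best remaining slot
assignment = {}
for name, demand in sorted(devices.items(), key=lambda kv: -kv[1]):
    slot, bw = slots.pop(0)
    assignment[name] = slot
    if demand > bw:
        print(f"warning: {name} ({demand} GB/s) exceeds {slot} ({bw} GB/s)")

for name, slot in assignment.items():
    print(f"{name:>15} -> {slot}")
```

Note this ranks by throughput only; a latency-sensitive device like a SLOG can still deserve a CPU-attached slot even though its throughput demand is small.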
The throughput available on PCH-based PCIe slots is shared (everything behind the PCH funnels through a single uplink to the CPU) and somewhat lower than on CPU-based PCIe slots. The latency won't be that much worse, though, so attaching your L2ARC devices there can make sense if your L2ARC isn't that stressy and most of your reads are served from primary ARC. However, because SLOG is incredibly sensitive to latency, if you're doing sync writes, you'd probably want to see if you can get SLOG onto CPU PCIe.
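To see why PCH slots share throughput, a quick back-of-envelope calculation: on many Intel platforms the PCH's uplink (DMI 3.0) is roughly equivalent to a PCIe 3.0 x4 link, and every device behind the PCH contends for it. The device list and demand figures below are illustrative assumptions:

```python
# Sketch: aggregate demand behind the PCH vs. its single uplink to the CPU.
# DMI 3.0 is roughly PCIe 3.0 x4; the devices/demands are assumptions.

GT_PER_LANE = 8.0          # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130       # 128b/130b line encoding
uplink_gbps = 4 * GT_PER_LANE * ENCODING / 8   # GB/s, ~3.94

# Hypothetical devices hanging off the PCH, sustained GB/s each
pch_devices = {"L2ARC NVMe": 2.5, "SATA pool disks": 1.2, "10GbE NIC": 1.25}
demand = sum(pch_devices.values())

print(f"DMI uplink: {uplink_gbps:.2f} GB/s, concurrent demand: {demand:.2f} GB/s")
if demand > uplink_gbps:
    print("oversubscribed: PCH devices will contend for the uplink")
```

If those devices are rarely busy at the same time, the sharing is harmless; it's concurrent peaks that hurt, which is why a mostly-idle L2ARC tolerates the PCH fine.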
There is no one right answer. You need to consider what normally goes on in your systems and optimize for that.