I would use the 8 SATA ports for your storage drives. For boot, use either mirrored USB drives or another small SSD on an adapter card. A SLOG will be necessary since you're planning to host VMs (lots of sync writes), unless you're okay with the potential for data loss/rolling back to a previous snapshot.
Normally it's been recommended not to use the same device for L2ARC and SLOG, because the two workloads are very different and, metaphorically speaking, tend to "step on each other's toes," making performance poor for both - but in the case of the Optane drives, with their significant bandwidth and low-queue-depth performance, you might actually be able to get away with it. Your SLOG partition doesn't need to be anywhere near that big though - it only has to hold a few seconds' worth of in-flight sync writes, so 4GB would be fine, and 8GB is the generally accepted size. With 64GB of RAM, L2ARC could safely be 192GB or even 256GB, and you can add or remove it on the fly after pool creation. Whether you actually need the L2ARC is a different question, but based on the proposed use case you might benefit from having some of that data be "hot."
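Just as a rough sketch of what that looks like once you've partitioned the Optane (pool name, device names, and partition sizes here are placeholders - substitute whatever your system actually shows):

    # assumed layout: nvme0n1p1 = 8GB SLOG partition, nvme0n1p2 = ~200GB L2ARC partition
    zpool add tank log /dev/nvme0n1p1      # SLOG (separate intent log)
    zpool add tank cache /dev/nvme0n1p2    # L2ARC
    # cache devices can be removed later without harming the pool, so resizing is low-risk:
    zpool remove tank /dev/nvme0n1p2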
There's probably room for some big wins by tuning recordsize for those SQL and Oracle DBs as well - matching the dataset's recordsize to the database's block/page size avoids a lot of read-modify-write amplification on small random I/O.
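Something like the following, as an example only - the dataset name is made up, and you'd want to match the value to whatever block/page size your particular databases actually use (8K is a common default) rather than take 8K as gospel:

    # recordsize only applies to newly-written data, so set it before loading the DB files
    zfs set recordsize=8K tank/db/oracle
    zfs get recordsize tank/db/oracle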
For networking, you don't actually mix link aggregation and iSCSI - you set the ports up as multiple independent links in separate subnets, and then use MPIO to handle the multipathing (see the link and quick sketch below). To get full utilization you'd need four NICs in each host, though.
https://pve.proxmox.com/wiki/ISCSI_Multipath
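On each Proxmox node it ends up looking roughly like this - the portal IPs are placeholders for your two storage subnets, and the wiki page above covers the multipath.conf side of it:

    # discover and log in to the target over both storage subnets
    iscsiadm -m discovery -t sendtargets -p 10.10.1.10
    iscsiadm -m discovery -t sendtargets -p 10.10.2.10
    iscsiadm -m node -L all
    # verify that multipathd has grouped both sessions into one device
    multipath -ll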
As a note, the "ZFS over iSCSI" page on the Proxmox wiki says "Note: iscsi multipath doesn't work yet, so it's use only the portal IP for the iscsi connection." - so you may want to use a regular iSCSI volume and manually created QCOW2 images on a dataset with a tuned recordsize. The default internal blocksize in Proxmox is 64K, though, which is not good for performance. The blog linked below goes into a fair bit of detail about this, with benchmarks of ZVOL vs QCOW2 images.
http://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/
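If you do go the manual QCOW2 route, the key point is keeping the dataset recordsize and the image's cluster size in agreement - something along these lines, where the dataset name and path are placeholders and the 64K value is just qcow2's default cluster size, not a recommendation (which value actually wins for your workload is exactly what the benchmarks in that blog dig into):

    # create the backing dataset with a recordsize that matches the qcow2 cluster size
    zfs create -o recordsize=64K tank/vmstore
    qemu-img create -f qcow2 -o cluster_size=64k /tank/vmstore/vm-100-disk-0.qcow2 100G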