No, iSCSI (at least with file extents) is async by default.
You basically have a few choices here. VMware isn't being totally stupid. Their interest is in maintaining the integrity of your VMs even in the event of faults and failures. So on the high end, storage systems actually implement battery backed write caches and other strategies to be able to acknowledge sync write operations quickly. Low end hard-drive based NAS engines that lack a BBU will either simply disregard the sync requests (and go fast) or honor them (and go real slow). But either way, that isn't really VMware's fault. VMware really has no clue about WHAT its VMs are trying to write to disk, how important it might be, or how resilient it might be in the event of a crash, so insisting on sync writes is a rational choice.
Pushing everything with sync writes is effectively a giant exercise in measuring latency in your I/O subsystem. ZFS is big and piggy, but it has features designed to at least offset some of those downsides. If you don't use those features, ZFS very much resembles a low end hard-drive based NAS. Your choices in that case end up being: arrange to ignore sync and accept some possibility of data loss, or write sync and go real slow.
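To make that concrete, this choice is controlled per dataset (or zvol) by the ZFS sync property. A minimal sketch, assuming a placeholder dataset named tank/vmware:

```
# The sync property decides what happens to sync write requests.
# "tank/vmware" is a placeholder dataset name.

# Default: honor sync requests from clients (NFS, iSCSI initiators, etc.)
zfs set sync=standard tank/vmware

# Ignore sync requests: fast, but "acknowledged" writes can vanish in a crash
zfs set sync=disabled tank/vmware

# Treat every write as sync: safest, and very slow unless you have a proper SLOG
zfs set sync=always tank/vmware
```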
But with ZFS, you do have the option to use ZFS's mechanism for guaranteeing sync writes. This is the functional equivalent of a BBU write cache on a high end NAS. It requires you to have a SLOG device with a supercapacitor or other similar power-loss write-completion protection, and then you can successfully accelerate NFS in sync mode.
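Attaching a SLOG is a single zpool operation; a rough sketch, with the pool name and device path as placeholders (you'd point this at a power-loss-protected SSD, ideally mirrored):

```
# Add a dedicated log (SLOG) device to the pool.
# "tank" and "/dev/ada4" are placeholders; substitute your own pool and device.
zpool add tank log /dev/ada4

# Verify the log vdev is present and healthy
zpool status tank
```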
For iSCSI, you have to actually TELL ZFS that you want the writes to be sync (and you probably should) by setting sync=always.
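For example, assuming the iSCSI extent is backed by a zvol at tank/iscsi-vm (a placeholder name):

```
# Force sync behavior on the zvol (or file extent dataset) backing the iSCSI target.
# "tank/iscsi-vm" is a placeholder; use your own zvol or dataset.
zfs set sync=always tank/iscsi-vm

# Confirm the setting took effect
zfs get sync tank/iscsi-vm
```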
So what's the difference between iSCSI with sync=standard and NFS with sync=disabled? One of them is not disabling sync within ZFS itself: with sync=disabled, every sync request on that dataset is ignored, and some people fear that could lead to filesystem loss or corruption within ZFS. You can do your own research on that one. So the NFS route may put the ZFS filesystem itself at risk, but the iSCSI route does not.
However, in both cases, you must be able to guarantee that data your VMware host writes is actually written to disk. You can do this through a SLOG device, or by taking the performance hit of writing to the pool with sync. If you choose not to make such a guarantee, then what happens is that when some adverse event occurs, your VM thinks it has updated some blocks on disk, those blocks are sitting in your NAS's memory waiting to be written out, the NAS reboots, never writes them, and suddenly the VM disk is inconsistent with what the VM thinks ought to be out there, and much hilarity ensues.