I don't claim to be any kind of expert, but I've been doing what you're doing for a while. If I understand your configuration correctly, then my opinion is probably yes, that looks right-ish.
How do you have those drives arranged into pools? Are you running mirrored pairs for those two iSCSI stores? I'm not sure what controller you're using either.
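If it helps to share the layout, zpool output shows the vdev arrangement directly (the pool name below is just a placeholder for yours):

    zpool status -v tank
    zpool list -v tank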
Once you cross into iSCSI storage for VMs (typically a heavy random workload), things get complicated.
I'm not sure your system in its current configuration is built for heavy sync I/O. I don't see you listing anything that looks like a SLOG. Remember that sync writes have to be committed to non-volatile storage somewhere, so if you aren't providing a SLOG, they're going to the default ZIL location in the pool, which usually slows things down. There are many posts about this topic on the forum; see the SLOG insights post by jgreco. I'm guessing this is the area you need to study and evaluate.
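As a rough sketch (the pool, zvol, and device names here are placeholders, not your actual setup), this is how you'd check how sync writes are being handled and what adding a SLOG looks like:

    # See whether the zvol backing the iSCSI extent honors/forces sync writes
    zfs get sync tank/iscsi-vm01

    # Add a power-loss-protected SSD as a dedicated SLOG
    # (da6 is a placeholder; use the right device on your system)
    zpool add tank log da6

Just don't be tempted by sync=disabled on VM storage; you'd be trading away crash consistency.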
Something else to consider: people around here beat up benchmarks like these. You're testing sequential reads/writes, which can show the maximum potential of the hardware and software in your configuration, but that's probably nowhere close to a realistic performance expectation for VMs. Maybe you already realize that. If you're trying to gauge how the storage will perform for your VMs, you want to switch to random I/O testing.
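For example, a fio run along these lines is closer to a VM-style workload than a sequential test (the path, size, and read/write mix are just example values to adjust for your setup):

    # 70/30 random read/write mix at 4k, which looks more like VM traffic
    fio --name=vm-sim --directory=/mnt/tank --size=4g \
        --rw=randrw --rwmixread=70 --bs=4k --iodepth=16 \
        --ioengine=posixaio --runtime=60 --time_based --group_reporting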
I'm running a handful of VMs that cover every function you could expect (database, media, etc.), and they don't usually tax the system the way your tests do. If I run a sequential test, my pool of 20 old, terrible disks in RAIDZ1 groups can still do ~500MBps, probably just a function of having so many spindles. If I did them in mirrors like you're supposed to, performance would probably be better; I just wouldn't have enough space.
If you want to test real performance, throw a couple of VMs on there and generate real workloads, then use your gstat/zilstat/zpool iostat tools to gauge what's going on. That should give you better data.
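While the VMs are busy, something like this (pool name again a placeholder) will show you where the time is going:

    # Per-disk busy% and latency
    gstat -p

    # ZIL activity at 1-second intervals, to see how hard sync writes hit
    zilstat 1

    # Per-vdev bandwidth and IOPS at 1-second intervals
    zpool iostat -v tank 1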