Here's some stuff to think about...
RAID10 really stores 2 copies of your data, so every write goes to both halves of each mirror. That means writes top out at, best case, the speed of a RAID0 of 2 disks (reads can sometimes do a little better, but it's more complex than that, so bear with me). If each disk can do a theoretical 75MB/sec, then you aren't going to beat about 150MB/sec. I believe that's why your RAID10 dd test was so low.
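For reference, here's the kind of sequential dd test I'm assuming you ran (the pool name "tank" is just a placeholder, and compression needs to be off on the test dataset or the zeros will compress away and give you bogus numbers):

```
# sequential write: 10GB of zeros in 1MB blocks (make the file bigger than your RAM)
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=10000
# sequential read: read the same file back; if the file fits in ARC (RAM),
# the read will be served from cache and the number will be meaningless
dd if=/mnt/tank/testfile of=/dev/null bs=1M
```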
A RAIDZ2 of 4 drives, on the other hand, works differently. Your data is striped across all the disks together with parity data. At first you'd assume you'd still be limited to the same ~150MB/sec, since only 2 disks' worth of each stripe is actual data, but that's actually not true (and your test has proven this). Each disk reads parity data along with the actual data, and ZFS's prefetching causes the disks to "read ahead" into sectors that contain actual data too. So you do kinda-sorta get all 4 spindles' worth of bandwidth. I say kinda-sorta because there's some added complexity so it's not completely true, but I think you get the idea.
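Don't take my word for it; you can watch the per-disk numbers yourself while a big sequential read is running (pool name is again just an example):

```
# per-disk bandwidth, refreshed once a second; on a 4-disk RAIDZ2 doing a
# big sequential read you should see all four spindles moving data at once
zpool iostat -v tank 1
```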
Now, when you start doing stuff with iSCSI, you are probably going to have latency issues. It seems that everyone does, because almost nobody reads up on the fact that ZFS + iSCSI is non-trivial to get good performance out of (kudos for already figuring this out). For iSCSI on ZFS, a RAID10 should provide some latency relief over RAIDZ2 for disk writes, since there's no parity data to read/calculate/write. Your G2020 isn't exactly a high-performing CPU, so you may see additional latency from the parity calculations with a RAIDZ2, whereas a RAID10 has no parity calculation penalty but is instead limited by the number of spindles you have.
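If you want to see the latency instead of guessing at it, gstat on FreeBSD (which FreeNAS sits on) shows per-disk service times in real time; run it while the iSCSI initiator is doing actual work:

```
# refresh once per second; the ms/r and ms/w columns are per-operation
# read/write latency, and %busy tells you which disks are saturated
gstat -I 1s
```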
If you've fallen asleep already, you need to keep this in mind: the benchmarks you did show the theoretical maximum speed for sequential reads and writes in a non-networked environment. The load you are going to get from iSCSI is nothing like a dd command. ZFS is a CoW file system, which (along with other things) makes the benchmark you did somewhat useless for iSCSI purposes. What you really need to do is try one configuration, and if you have problems with it, try the other. Or look at adding more hardware (RAM, ZIL, L2ARC, Intel NIC, etc.) or switching to a UFS RAID (I believe those are supported in FreeNAS; I'm not a UFS user myself). Don't let some hypothetical benchmark values be your deciding factor, because those aren't necessarily going to reflect reality. The game isn't to have a big epeen because you have awesome benchmark values. The game is for the FreeNAS server to perform well enough for you to be happy.
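For what it's worth, if you do go the add-hardware route, bolting a SLOG (dedicated ZIL device) or L2ARC onto an existing pool is a one-liner each. The device names below are made up; use a fast SSD, and ideally a power-protected one for the log device:

```
# hypothetical devices: ada4 = SSD used as a dedicated log (SLOG) device,
# ada5 = SSD used as an L2ARC read cache
zpool add tank log ada4
zpool add tank cache ada5
```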
My guess is that a RAID10 is more likely to work for you for iSCSI than a RAIDZ2, despite the benchmarks making RAIDZ2 seem like the very logical choice on the surface. All you can do is test, test, test, and then test some more. This is why a lot of people hate ZFS, and why one of the more popular ZFS tuning guides is literally called the "ZFS Evil Tuning Guide". There are lots of tradeoffs between bandwidth, latency, available CPU horsepower, ZFS efficiency, and more. If you go outside a small range of uses, you will be forced to learn how all these tradeoffs work and figure out how to strike a happy medium between them. It might take a few simple tweaks to fix it for you, or it might require all-new, more powerful hardware. Each admin's needs are slightly different, which is why some admins get paid a lot of money to figure all this stuff out.
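And just so "a few simple tweaks" isn't hand-waving, this is the kind of knob those tuning guides deal in, set via /boot/loader.conf on FreeBSD/FreeNAS (the 8GB figure is only an example; size it to your RAM):

```
# cap the ARC at 8GB (value is in bytes) so ZFS leaves memory for other things
vfs.zfs.arc_max="8589934592"
```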
Good luck!