No, this is a known issue. See bug #1531. ZFS needs to be carefully tuned for heavy write loads if you don't want the system to go catatonic for (short?) periods. ZFS likes to pile up writes into a "transaction group" (txg) and then dump the whole group out to storage at once; if data is piling up faster than your I/O subsystem can actually handle, ZFS will helpfully make your applications (in this case, iSCSI or dd) wait until the txg is flushed. The interactions are nonobvious unless you've worked with this for a while.
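You can watch this happening, by the way. With your own pool name substituted in (I'm using "tank" as a placeholder), a one-second zpool iostat makes the burstiness obvious:

    # writes should sit near zero, then spike hard at every txg flush,
    # right around when the dd or iSCSI initiator appears to hang
    zpool iostat tank 1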
First, the raw speed of your drives is likely much higher than the rate at which ZFS can actually commit data, especially if you're using something like RAIDZ or RAIDZ2, or to a much lesser extent, mirroring. For example, consider a 4-drive RAIDZ2 built from drives capable of writing at about 70MB/sec each: only two of the four drives in each stripe hold data (the other two hold parity), so the pool cannot come anywhere near 4x70MB/sec, and in practice it was exhibiting horrifying behaviour trying to write data at just 70MB/sec.
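If you want a real number for what your pool can commit, benchmark the pool rather than trusting the drive specs. Something like this does it; the path and size are just examples, and note that if compression is enabled on the dataset, writing zeroes will give you a uselessly optimistic number:

    # rough sustained-write test; 10GB is big enough to force real commits
    dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=10240
    rm /mnt/tank/ddtest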
Second, more memory can make the problem worse, because ZFS sizes its txgs in part based on system memory; historically a txg could grow to something like an eighth of RAM before ZFS forced a flush. This can be tuned manually, however, so reducing memory is not necessarily a good solution. The counterintuitive part: you don't want to increase the cache size, you want to DECREASE it.
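To be concrete: on the FreeBSD base under FreeNAS, the usual way to do that is to cap the ARC in /boot/loader.conf rather than pulling RAM out of the box. The value below is purely an example to show the shape of the setting, not a recommendation, and loader.conf tunables only take effect after a reboot:

    # /boot/loader.conf -- example value only; size this for YOUR workload
    vfs.zfs.arc_max="2G"    # cap the ZFS cache instead of removing memory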
Third, actually fixing this problem is pretty much impossible due to the design of ZFS, at least as far as I can tell. It can be made tolerable, however, which effectively means reining in the pool so it isn't trying to cram more data out to disk than the disks can cope with, AND making the values for frequency-of-flush and size-of-flush fit into an iSCSI-compatible performance envelope. As you noticed, many initiators freak out when their SCSI devices stop responding in a generally reasonable timeframe.
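On the FreeBSD 8/9-era ZFS that FreeNAS ships, the knobs that map onto frequency-of-flush and size-of-flush are, as far as I know, vfs.zfs.txg.timeout and vfs.zfs.write_limit_override (newer ZFS reworked this machinery into vfs.zfs.dirty_data_max, so check what your version actually exposes). The numbers below are example values only; the point is to experiment while watching how your initiator behaves:

    # flush smaller txgs, more predictably -- experiment with the numbers
    sysctl vfs.zfs.txg.timeout=5                   # force a txg flush every 5 seconds
    sysctl vfs.zfs.write_limit_override=268435456  # cap a txg at 256MB (value in bytes)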
Fourth, the problem probably gets much more tolerable as the number of vdevs in a pool increases, because the available IOPS go up dramatically. Mirrors are the best you can get for IOPS while still having data protection, as far as I can tell, so you're already in a good place there.
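Rough numbers, assuming ~100 random IOPS per 7200rpm spindle and the usual rule of thumb that a RAIDZ/RAIDZ2 vdev delivers about one drive's worth of random IOPS:

    # same six drives, very different pools (hypothetical round numbers)
    echo $((1 * 100))   # one 6-drive RAIDZ2 vdev: ~100 write IOPS
    echo $((3 * 100))   # three 2-drive mirror vdevs: ~300 write IOPS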
Fifth, for gigabit-level file service, the limiting speed of gigabit ethernet will serve as a choke point that artificially caps how much data ZFS is being asked to write. Once your pool can sustain that level, you've effectively mitigated the problem. This is not the same thing as fixing the problem, however. I suspect there are lots of ZFS pools out there that "work fine" despite having an unreasonably large txg size, only because some other part of the system is limiting how big a txg actually gets. Those same systems might melt under a local dd. You'll want to remember that if you go the "mitigate" route.
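The back-of-envelope version, assuming the 5-second txg interval from above:

    # gigabit moves at most ~112MB/sec of payload, so in one 5-second
    # txg interval the network can only hand ZFS about this much:
    echo $((112 * 5))   # => 560 (MB); a pool that can flush 560MB
                        # well inside 5 seconds never shows the stall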
Sixth, iSCSI extents... ZFS is a copy-on-write filesystem. That means that if you write blocks 0, 1, 2, 3, 4, and 5 in your iSCSI extent, and then later rewrite block 3, the new block 3 will be written elsewhere in the ZFS pool, and your blocks are no longer contiguous on disk. So you probably want to avoid unnecessary small writes to iSCSI extents on ZFS. You probably also want to keep more free space on your ZFS pool than the average ZFS user does (the normal advice is that beyond 75-80% capacity you start hurting); more free space means ZFS is more likely to be able to allocate newly written blocks near the old ones, which reduces the impact of nonlocality when the initiator later reads the extent sequentially.
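Watching capacity is cheap, so do it; CAP is the column that matters here, and for an iSCSI-heavy pool I'd want it well under the usual 75-80% advice (pool name is just an example again):

    # keep CAP well below the usual 75-80% pain threshold for iSCSI use
    zpool list tank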
Seventh, while ZFS is great and all that, it is worth considering whether ZFS's features are actually required for what you are trying to accomplish. I've been tending towards thinking that ZFS is not the right fit for our iSCSI needs here, or at least not our primary iSCSI needs. FreeNAS supports UFS as well, and UFS seems to be blazingly fast for this.