Copy-on-write (COW) is, generally speaking, probably a poor design decision for an iSCSI storage platform built on standard hard drives.
I'll note that there are various things that can be done to mitigate these problems. I'm not here to discuss those; I already know about them. I've spent a lot of time looking at the various problems FreeNAS users have with ZFS and iSCSI. It is certainly possible to get a system that performs reasonably well, but there are some significant caveats. You can look at the results of all the time I spent screwing around with txg buffer sizing in bug 1531, for example. You can imagine that I have similar experience messing with this.
With specific respect to COW and the point I've made in the past: an iSCSI device is a virtual hard drive. An initiator sees it as a linear, contiguous array of blocks, and generally speaking, we've spent several decades writing filesystems that treat it as such and optimize for that layout.
But let's ignore that for a minute and investigate the underlying issue. You create a 1TB file for iSCSI (roughly 2 billion 512-byte blocks). You mount it on a client and then read it with dd on the raw disk device. William gets his 800MB/sec. Great. You proved your point.
Now you give me the client. I go in and write one million randomly selected blocks. What you're going to find is that the blocks in the ZFS extent file are no longer generally contiguous, and as a result of additional seek delays, you get something less than 800MB/sec doing that same read test.
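To make the effect concrete, here's a toy simulation (my own illustration, not anything from FreeNAS or ZFS code; the block counts are scaled down and the allocator is grossly simplified): the extent starts out laid down contiguously, each random COW rewrite relocates a block into new free space, and then we count how many discontinuities a sequential read of the extent would hit.

```python
import random

BLOCKS = 100_000    # scaled-down extent: 100k blocks instead of ~2 billion
REWRITES = 5_000    # scaled-down stand-in for the one million random writes

# Initially the extent file is contiguous: logical block i sits at
# physical block i on the pool.
phys = list(range(BLOCKS))
next_free = BLOCKS  # COW allocates the rewritten copies past the old data

random.seed(42)
for _ in range(REWRITES):
    lba = random.randrange(BLOCKS)
    phys[lba] = next_free   # copy-on-write: block moves to a new location
    next_free += 1

def seeks_for_sequential_read(layout):
    """Count the discontinuities a sequential read of the extent crosses."""
    return sum(1 for a, b in zip(layout, layout[1:]) if b != a + 1)

print("seeks before random rewrites:", seeks_for_sequential_read(list(range(BLOCKS))))
print("seeks after random rewrites: ", seeks_for_sequential_read(phys))
```

The freshly created extent reads with zero seeks; after the random rewrites, nearly every relocated block costs the sequential reader two discontinuities, which is exactly why that 800MB/sec number doesn't survive.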
So. Here is another troubling data point. ZFS would love to allocate those new blocks "nearby" in order to reduce seek time, yes? Of course it would. But if you look at the users who come to the forum, there is a fairly common expectation that an NN GB ZFS volume should be able to host an NN GB iSCSI extent. That's foolish, of course; let's agree on that point right away. Anyone who has spent any time with ZFS knows it really needs to be kept below about 80% full or performance degrades horribly (in part due to exactly what we're talking about!). However, experimentation suggests this issue is substantially worse with iSCSI: in order to reliably allocate blocks sufficiently nearby to minimize the problem, and to give ZFS a plentiful bucket of free blocks to work with, it appears to me that a better ceiling is more like 60%, or even 50%, of capacity, which is a lot of space to hold in reserve.
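The space math there is worth spelling out. These percentages are the rules of thumb from above, not official ZFS numbers; a quick back-of-the-envelope:

```python
def max_extent_tb(pool_tb, fill_ceiling):
    """Largest iSCSI extent you'd carve out if you cap pool usage
    at `fill_ceiling` (a fraction of total pool capacity)."""
    return pool_tb * fill_ceiling

# The 80% conventional-ZFS rule vs. the ~60%/50% iSCSI rules of thumb.
for pool_tb in (10, 20):
    for ceiling in (0.8, 0.6, 0.5):
        print(f"{pool_tb} TB pool @ {ceiling:.0%} fill ceiling -> "
              f"{max_extent_tb(pool_tb, ceiling):.1f} TB extent max")
```

In other words, on a 10TB pool the 50% rule leaves you selling only 5TB of extent, and the other 5TB exists purely so the allocator has room to keep blocks near each other.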
But even so, fragmentation accumulates as an iSCSI extent file ages and the number of random writes grows, and eventually there's a large amount of it to be dealt with. This hurts performance.
So one of my common talking points is about building VMs that are designed for the virtual environment, rather than just assuming they have their own physical resources. Pretty much every UNIX system likes to do trite writes, stuff like atime updates, which add approximately zero value. If that filesystem is mounted on an iSCSI disk, and underlying it on the server is ZFS, I can cause a bunch of writes just by READING files (their atimes all get updated), and suddenly the contiguous blocks that made up my filesystem are no longer contiguous. Nice! What about when someone rebuilds ports? Or does a makeworld? ("HORROR!") It isn't the single reallocation, one time, that's the problem. It's the cumulative effect this has on an extent file over a period of months or years.
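Here's another toy model (again my own illustration, not real filesystem code) of that atime point: a "read-only" workload at the client still dirties inode metadata, and under COW each dirtied inode gets rewritten to a new location, so the metadata region shreds itself even though nobody wrote any data. Mounting with noatime makes the churn go away entirely.

```python
import random

def inode_relocations(reads, atime_enabled, n_files=10_000, seed=1):
    """Toy model: inodes start in a contiguous region; with atime on,
    every file read dirties its inode, which COW rewrites elsewhere.
    Returns how many inodes are no longer at their original location."""
    rng = random.Random(seed)
    loc = list(range(n_files))   # inode i initially at block i
    next_free = n_files
    for _ in range(reads):
        f = rng.randrange(n_files)
        if atime_enabled:        # the read itself triggers a COW write
            loc[f] = next_free
            next_free += 1
    return sum(1 for i, l in enumerate(loc) if l != i)

print("atime on: ", inode_relocations(3_000, True), "inodes relocated by reads")
print("noatime:  ", inode_relocations(3_000, False), "inodes relocated by reads")
```

Which is the whole argument for tuning guest filesystems (noatime, fewer pointless metadata writes) when they live on a COW-backed extent.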
Anyways, COW brings some interesting capabilities to the table, no doubt, but in many cases those capabilities aren't needed. I've been leaning towards implementing iSCSI on top of UFS-based file extents, which is of course limiting in several ways. However, their performance characteristics don't degrade over time the way ZFS-based file extents do, and the dynamics are trivial to understand, unlike ZFS, which has been a relative nightmare of design decisions that are not quite right for iSCSI use.