Ah. You have a RAID controller with on-card RAM. Based on my benchmark and real-world testing with 3 different RAID controllers that had on-card RAM, here are my recommended settings for ZFS users:
1. Disable your on-card write cache. Believe it or not, this improves write performance significantly. I wasn't happy about having to make this choice, but it seems to be a universal truth. I upgraded one of my cards to 4GB of cache a few months before going to ZFS and I'm disappointed that I wasted my money. It helped a LOT on the Windows server, but in FreeBSD it's a performance killer. :(
2. If your RAID controller supports a read-ahead cache, set it to either "disabled", the most "conservative" (smallest read-ahead), or "normal" (medium-size read-ahead). I found that "conservative" was better for random reads from lots of users and "normal" was better for workloads that read a file sequentially (such as copying a single very large file). If you choose anything larger for the read-ahead size, the latency of your zpool will go way up, because every read the zpool issues gets amplified as much as 100x: the RAID card is constantly reading a bunch of sectors before and after the one sector or area actually requested.
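If it helps, here's roughly what those two settings look like on an LSI-based card managed with MegaCli. Treat this as a hedged sketch, not gospel: your card's utility and exact option names will almost certainly differ, and the -Lall/-aAll selectors here just mean "every logical drive on every adapter".

# Set write-through (i.e. disable the on-card write cache) on all logical drives:
MegaCli -LDSetProp WT -Lall -aAll
# Disable the controller's read-ahead:
MegaCli -LDSetProp NORA -Lall -aAll
# Verify what the card is actually doing now:
MegaCli -LDInfo -Lall -aAll

Other vendors bury the same knobs in their BIOS or web utilities under names like "Write Policy" and "Read Policy", so dig around if you don't have an LSI card.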
If you deviate from these settings, ZFS performance tanks BADLY in every experience I've had (and everything I've read). ZFS likes to make its own decisions about when to write to and read from the disks, and it seems to be very smart about not trying to do both at the same time (seek times are a PITA). Having a second device that starts making those decisions (your RAID card in particular) is really bad news, because you are almost guaranteed to end up doing both at the same time. What happens is the RAID card tries to flush its write cache at the same time ZFS tries to do reads. Then performance tanks, and it gets harder and harder to recover without waiting for the zpool to go completely idle (ZFS and RAID controller caches flushed).

For me, just disabling the write cache more than doubled my zpool performance. Changing the read-ahead more than doubled it again! I went from about 130MB/sec on my personal zpool to over 1GB/sec. I wasn't happy when a 30TB zpool wanted to scrub at 130MB/sec, so I had to figure out what the problem was (a 67-hour scrub during which you can't even stream 1 movie was unacceptable). Turns out the RAID card settings made all the difference. Who'd have thunk it? After all, everything you read about ZFS says to use "dummy" SATA controllers and never use RAID controllers (which you and I are both using). I actually removed the battery backup from my server and sold it to a friend that still uses Windows. It serves no purpose if you have the write cache disabled.
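If you want to see where you stand before and after changing the card settings, the stock ZFS tools are enough; "tank" below is just a placeholder for your pool name:

# Start a scrub and check the speed/ETA it reports:
zpool scrub tank
zpool status tank
# Watch per-vdev throughput (refreshing every 5 seconds) while the scrub or your normal load runs:
zpool iostat -v tank 5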
There is no zfs_nocacheflush sysctl for FreeBSD. The equivalent is vfs.zfs.cache_flush_disable (the default is "0", as verified by sysctl -a | grep vfs.zfs.cache). Remember in your Googling travels that Solaris' settings are different from FreeBSD's. The theoretical function and recommended use/non-use cases are usually correct (but not always, and should always be verified). This only adds to the confusion for people that haven't been tweaking FreeBSD for years (raises my hand) and makes it even harder to determine which tweaks work and which don't (after all, if a sysctl doesn't exist in FreeBSD, how do you prove it easily?).
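For the record, this is all it takes to see where you stand on a stock FreeBSD/FreeNAS box; I'm not telling you to change anything:

# Show the current value (0 = cache flushes enabled, which is the safe default):
sysctl vfs.zfs.cache_flush_disable
# List every ZFS cache-related sysctl, which is how I verified the default above:
sysctl -a | grep vfs.zfs.cache

If you ever did flip it for a throwaway test, then depending on your FreeBSD version it's either a runtime sysctl or a boot-time tunable that goes in /boot/loader.conf. But keep reading before you even think about it.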
Before you go trying that setting, my advice is to read this topic and then not do it for any reason except testing (and have good backups.. I think I've said that before to you, though). Even if you try it and it works great (yes, I think it will make you very happy performance-wise), I don't think it'll "teach" you anything except that skipping a "latency intensive task" makes the server so much faster. (Well, no duh!) I don't even try to tweak my systems, because statistically I have a far higher chance of kissing my data goodbye or actually hurting performance than helping it. I've yet to see a noobie show up in the forums with tweaks that are actually well thought out (and actually work). It's not for the faint of heart, and the consequence can be a zpool that is corrupted beyond repair/mounting. Probably 95%-99% of the time when someone complains about poor performance, I tell them to delete their "performance enhancing tweaks" (if they had any), and by following this direction they see performance increase. The other downside is that if corruption does begin to happen, you may not even know it until your next scrub (assuming a scrub is capable of finding the corruption), so the storage needs of your backups will also go way up. This tweak has the universal result of major performance increases, but at the cost of major data-loss suckage.
You are really fighting a losing battle performance-wise, because you chose a vdev type that requires redundancy calculations. For every sync write, that forces a stripe to be read (if it isn't already in the read cache), the change to be made in memory, the redundancy calculations to be performed, and then the data to be written. I think (but don't quote me on this) that using a ZIL means the write goes to the ZIL without the extra latency I just mentioned (no stripe read, no redundancy calculations, etc.). I have no idea how much it will "help" your numbers. It might double, it might go up 10x. I really don't know. The ZIL (and L2ARC, for that matter) are extremely complex. They aren't your typical dummy write cache like the one on your RAID controller. The ZIL only caches certain writes and only in certain circumstances (it's up to the advanced server admin to adjust the tweaks, if necessary, to get the most value out of the ZIL). Everything else is written directly to the zpool and there's not a darn thing you can do about it (without even more tweaking).
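Just so you know what "getting a ZIL" actually looks like mechanically, here's a sketch. The pool, partition, and dataset names are made up; substitute your own, and do your homework before touching a production pool:

# Add a dedicated log (SLOG) device so sync writes land on it instead of the RAIDZ vdev:
zpool add tank log gpt/slog0
# Or, instead, add it as a mirrored pair so a dead SSD can't take your in-flight sync writes with it:
zpool add tank log mirror gpt/slog0 gpt/slog1
# Confirm sync writes are even in play for the dataset backing your iSCSI extent:
zfs get sync tank/iscsi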
There is a tradeoff between performance and reliability and you are certainly hitting that limit. This is why iSCSI isn't recommended on ZFS. There is no "magic wand" to fix it either. ZFS pushes the reliability "all the way to 11", which means performance will take a nosedive. That's why so much RAM is used by ZFS to help make performance even remotely manageable.
Personally, if I were in your shoes, I think I'd abandon trying to tweak your FreeNAS server and look at getting a ZIL (and using the RAID card settings I mentioned above). Normally I'm a big advocate against ZILs because everyone adds them without doing lots of homework. I've read everything I could find on ZILs (and L2ARCs) because I find how they work fascinating, but I don't even feel confident I could make a 100% certain choice on when to use or not use them. If a ZIL won't solve your problem, then the next step I'd try is going to mirrors instead of RAIDZ(x). This may not work well for you because you will lose a total of 50% of all of your disks to redundancy, but it may be the only way aside from doing very, very dumb things. If neither of those things helps, you might be forced to go to UFS and give up on ZFS completely. In all honesty, I've had great luck with my Intel SSDs (I don't use any other brands), and I would feel confident that an array of Intel SSDs on a RAID would be fairly safe (but definitely keep backups!). The lack of moving parts really increases reliability, but you do have limited write cycles on SSDs. I've never used UFS on FreeNAS, so I don't know what dirty tricks and/or limitations you may run into in your situation.
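Going back to the mirror suggestion for a second, this is the general shape of the pool you'd be building (again, pool and disk names are placeholders):

# Striped mirrors: three 2-way mirror vdevs. You lose 50% of raw space to redundancy,
# but you skip the RAIDZ parity math on every write and get more vdevs to spread I/O across:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
zpool status tank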
I don't know if you've read bug report 1531, but I'd give it a full read (not a skim). You'll see jgreco (he's the iSCSI-on-ZFS problem god), some of the stuff he did, and the lessons he learned. I think the "big ticket" items have been mentioned to you already, but I'd definitely read it before spending countless hours with Google.
Not to be pedantic, but you are probably making a bigger mess by opening multiple threads for a single problem. I realize this somewhat conflicts with the "one question per thread" philosophy (aka the forum rules), but you've created so many threads in the last few days for your one problem that I can't even remember what hardware you have and what questions you've asked. You might have been better off sticking to one thread; otherwise you may start getting conflicting answers (which will only confuse you even more) because nobody knows what was said in the other threads. I'm going to see if I can merge all of your applicable threads regarding this issue back into one for your own benefit.
I do give you props for your determination in trying to fix your problem!