Is that in the hardware guide?
I have a feeling it isn't and it is probably one of the sources of the trouble you are having.
Yes, it is. It does say that LSI controllers tend to be stable. It also mentions that if the controller is hardware-RAID capable, you shouldn't use that capability; you should make all the drives JBOD and let FreeNAS control them, which is what I have done. I tried to take into account what the ZFS primer says. FYI, here is the output of zpool status.
  pool: RAIDZ2-I
 state: ONLINE
  scan: scrub repaired 0 in 0 days 03:10:01 with 0 errors on Mon Apr 9 18:42:12 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAIDZ2-I                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/bd041ac6-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bdef2899-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bed51d90-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/bfb76075-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c09c704a-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c1922b7c-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c276eb75-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
            gptid/c3724eeb-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/a1b7ef4b-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a2eb419f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a41758d7-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a5444dfb-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a6dcd16f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a80cd73c-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/a94711a5-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
            gptid/aaa6631d-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0
        spares
          gptid/4abff125-23a2-11e8-a466-e4c722848f30    AVAIL

errors: No known data errors

  pool: SYS-MiRROR
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:07 with 0 errors on Sun Apr 8 00:00:07 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        SYS-MiRROR                                      ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/3c0e5fc1-a7f1-11e7-8a5c-e4c722848f30  ONLINE       0     0     0
            gptid/3dd26070-a7f1-11e7-8a5c-e4c722848f30  ONLINE       0     0     0

errors: No known data errors
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:04:06 with 0 errors on Sat May 12 03:49:06 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da19p2      ONLINE       0     0     0

errors: No known data errors
This is also not a recommended configuration for virtualization. Are you using iSCSI / block storage? Either way, the workload is likely a lot of small, random writes, so mirror vdevs are called for to get the IOPS (a sketch of that layout follows). VM hosts (ESXi for certain) use sync writes in an effort to prevent corruption of the VM.
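For reference, a pool of mirror vdevs is just the disks listed in mirrored pairs. This is only a sketch with placeholder device names, not your actual disks; in FreeNAS you would normally build it from the GUI so the members get gptid labels, but the resulting layout is the same:

# Sketch only: four 2-way mirror vdevs plus a hot spare; the daX names are placeholders.
# ZFS stripes across the vdevs, so random-write IOPS scale with the number of vdevs.
zpool create VMPOOL \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    spare da8
zpool status VMPOOL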
Your write performance problem is likely caused by the ZFS Intent Log (ZIL). Its function is to store sync writes on stable storage before acknowledging the write to the VM. Without a SLOG (Separate LOG device), the write first goes to what I think of as a temporary working space on the pool, the ack goes back to the VM, and then the data is written again to its permanent place on the pool when the regular transaction group is committed. Doing that double write to the same disks makes everything much slower. If you add a SLOG to the system, it will make it faster. Here is some info on that:
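For what it's worth, attaching a SLOG is a single command. The device name below is only a placeholder for a power-loss-protected SSD/NVMe device, and in FreeNAS you would normally do this through the GUI rather than the shell:

# Sketch: add a log vdev to the existing pool (placeholder device name).
zpool add RAIDZ2-I log nvd0p1
# Or mirror the SLOG for extra safety:
# zpool add RAIDZ2-I log mirror nvd0p1 nvd1p1
# It should then show up under a "logs" section in:
zpool status RAIDZ2-I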
It is defined as an NFS datastore in ESXi. I am starting to wonder if some of this is an issue with ESXi and NFS. I do use the FreeNAS for other things (a few CIFS shares and such), and I have 17 drives allocated to user-facing storage. With 2 vdevs of 8 disks each in RAID-Z2, loosely that gives me 75% of the physical space to use (6 data disks out of every 8).

If I understand what you are saying, you are suggesting that I should do 8 mirror vdevs instead. Is that correct? Loosely speaking, that would give me 50% of the physical space, which is a non-trivial loss. It is a home lab, so I don't need to wring every possible IOPS out of it; if my write performance were 33-50% of the read performance, compared to the current 14%, I think I would be happy with that.

I am going to investigate the ESXi NFS write performance as well as the SLOG. I have enough space available in other places that I can shuffle things around and rebuild if that is what is required. I also have a fair amount of RAM to work with, as well as solid UPS protection (easily 20-30 minutes of runtime, and FreeNAS is monitoring the UPS).
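One quick way to check whether sync writes over NFS are the bottleneck is to toggle the sync property on the dataset behind the datastore and rerun the benchmark. The dataset name here is hypothetical, and this is only a temporary test, since sync=disabled trades away data safety on power loss:

# Hypothetical dataset name; substitute the dataset backing the ESXi NFS datastore.
zfs get sync RAIDZ2-I/vmware
# Temporarily disable sync writes for the test only:
zfs set sync=disabled RAIDZ2-I/vmware
# ...rerun the write test from a VM and note the numbers...
# Then put it back:
zfs set sync=standard RAIDZ2-I/vmware

If writes jump to something close to read speed with sync disabled, that points squarely at the ZIL, and a SLOG should recover most of that difference safely.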