I have the results of some testing I have done. First I'll describe the file servers, which are creatively named freenas and freenas2.
 
freenas2 is my primary one.
Cisco UCS C240 M3S
FreeNAS 11.1-U4
Dual E5-2660 v2 CPU @ 2.20GHz
128G ECC RAM DDR3-1333
Chelsio T520-CR 10G-SR SFP+
t5nex0: <Chelsio T520-CR> mem 0xfb300000-0xfb37ffff,0xfa000000-0xfaffffff,0xfbb04000-0xfbb05fff irq 64 at device 0.4 numa-domain 1 on pci14
cxl0: <port 0> numa-domain 1 on t5nex0                                          
cxl0: 16 txq, 8 rxq (NIC)                                                       
cxl1: <port 1> numa-domain 1 on t5nex0                                          
cxl1: 16 txq, 8 rxq (NIC)                                                       
t5nex0: PCIe gen3 x8, 2 ports, 18 MSI-X interrupts, 51 eq, 17 iq                
Drives controlled by LSI 9271-8i in JBOD mode
AVAGO MegaRAID SAS FreeBSD mrsas driver version: 06.712.04.00-fbsd                                                                  
mrsas0: <AVAGO Thunderbolt SAS Controller> port 0xf000-0xf0ff mem 0xfbc60000-0xfbc63fff,0xfbc00000-0xfbc3ffff irq 56 at device 0.0 numa-domain 1 on pci13
mrsas0: Using MSI-X with 16 number of vectors                                                                                       
mrsas0: FW supports <16> MSIX vector,Online CPU 40 Current MSIX <16>                                                                
mrsas0: MSI-x interrupts setup success                                                                                              
17 x Cisco 1TB 7.2K SATA drives in internal drive bays
da0 at mrsas0 bus 1 scbus1 target 27 lun 0                                      
da0: <ATA ST91000640NS CC03> Fixed Direct Access SPC-4 SCSI device              
da0: 150.000MB/s transfers                                                      
da0: 953869MB (1953525168 512 byte sectors)                                     
ZFS Pool where VMware virtual machines are stored shared via NFS v3
        NAME                                            STATE     READ WRITE CKSUM                                                  
        RAIDZ2-I                                        ONLINE       0     0     0                                                  
          raidz2-0                                      ONLINE       0     0     0                                                  
            gptid/bd041ac6-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/bdef2899-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/bed51d90-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/bfb76075-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/c09c704a-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/c1922b7c-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/c276eb75-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/c3724eeb-9e63-11e7-a091-e4c722848f30  ONLINE       0     0     0                                                  
          raidz2-1                                      ONLINE       0     0     0                                                  
            gptid/a1b7ef4b-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a2eb419f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a41758d7-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a5444dfb-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a6dcd16f-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a80cd73c-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/a94711a5-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
            gptid/aaa6631d-3c2a-11e8-978a-e4c722848f30  ONLINE       0     0     0                                                  
        logs                                                                                                                        
          gptid/6f76bc3b-5aee-11e8-8c41-e4c722848f30    ONLINE       0     0     0                                                  
        spares                                                                                                                      
          gptid/4abff125-23a2-11e8-a466-e4c722848f30    AVAIL                                                                       
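On the ESXi side this dataset is mounted as an NFS v3 datastore. For reference, the CLI equivalent is roughly the following (hostname, export path, and datastore name are made up for illustration):

esxcli storage nfs add -H freenas2 -s /mnt/RAIDZ2-I/vmware -v freenas2-vmware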
SLOG - Intel Optane 900P 280G
nvme0: <Generic NVMe Device> mem 0xdf010000-0xdf013fff irq 40 at device 0.0 numa-domain 0 on pci5                                                               
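The Optane is attached to the pool as the log vdev shown in the status output above. FreeNAS does this through the GUI, but the CLI equivalent would be roughly a one-liner like this (gptid taken from the status output; double-check the device before running anything like it):

zpool add RAIDZ2-I log gptid/6f76bc3b-5aee-11e8-8c41-e4c722848f30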
-----------------------------------------------------------
freenas is the secondary box that I don't use all that much. I may try to set up some automated replication to it at some point, but haven't been inspired to do so yet.
Cisco UCS C240 M3S
FreeNAS 11.1-U4
Dual E5-2620 CPU @ 2.00GHz
64G ECC RAM DDR3-1333
Chelsio T520-CR 10G-SR SFP+
t5nex0: <Chelsio T520-CR> mem 0xfb300000-0xfb37ffff,0xfa000000-0xfaffffff,0xfbb04000-0xfbb05fff irq 64 at device 0.4 numa-domain 1 on pci14
cxl0: <port 0> numa-domain 1 on t5nex0                                          
cxl0: 16 txq, 8 rxq (NIC)                                                       
cxl1: <port 1> numa-domain 1 on t5nex0                                          
cxl1: 16 txq, 8 rxq (NIC)                                                       
t5nex0: PCIe gen3 x8, 2 ports, 18 MSI-X interrupts, 51 eq, 17 iq                
HP D2700 external enclosure with 25 SAS/SATA slots, controlled by an LSI 9207-8e. The D2700 has two controllers, and each one is connected to a port on the 9207-8e.
mps0: <Avago Technologies (LSI) SAS2308> port 0x7000-0x70ff mem 0xdf140000-0xdf14ffff,0xdf100000-0xdf13ffff irq 32 at device 0.0 numa-domain 0 on pci3
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd                                                                               
mps0: IOCCapabilities: 5a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc>                         
ses0 at mps0 bus 0 scbus0 target 33 lun 0                                                                                           
ses0: <HP D2700 SAS AJ941A 0149> Fixed Enclosure Services SPC-3 SCSI device                                                         
ses0: 600.000MB/s transfers                                                                                                         
ses0: Command Queueing enabled                                                                                                      
ses0: SCSI-3 ENC Device                                                                                                             
ses1 at mps0 bus 0 scbus0 target 59 lun 0                                                                                           
GEOM: da0: the secondary GPT header is not in the last LBA.
ses1: <HP D2700 SAS AJ941A 0149> Fixed Enclosure Services SPC-3 SCSI device
ses1: 600.000MB/s transfers                                                                                                         
ses1: Command Queueing enabled                                                                                                      
ses1: SCSI-3 ENC Device                                                                                                             
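With both D2700 controllers cabled to the 9207-8e you get the two ses devices above. If anyone wants to sanity-check that all 25 slots are visible, something like this should do it on FreeNAS 11 (hedged, from memory):

camcontrol devlist
sesutil map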
The 25 x 300G 10K SAS drives in the D2700 look like this:
da5: <HP DG0300FAMWN HPDF> Fixed Direct Access SPC-3 SCSI device                
da5: 600.000MB/s transfers                            
da5: Command Queueing enabled                                                   
da5: 286102MB (585937500 512 byte sectors)                                      
ZFS Pool where VMware virtual machines are stored shared via NFS v3
        NAME                                            STATE     READ WRITE CKSUM                                                  
        TEST2                                           ONLINE       0     0     0                                                  
          raidz2-0                                      ONLINE       0     0     0                                                  
            gptid/07379c8a-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/09d7ac93-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/0c7f06f4-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/0f28ba50-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/11cb5ce8-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/1478482b-60ef-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
          raidz2-1                                      ONLINE       0     0     0                                                  
            gptid/a2e860a8-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/a59c35e2-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/aa159c4a-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/acd5a2e2-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/af81bbb2-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/b23b1e54-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
          raidz2-2                                      ONLINE       0     0     0                                                  
            gptid/f486e8de-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/f73b2f3f-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/f9eabce3-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/fc9f41a5-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/ff7e9906-60f5-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/029453a0-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
          raidz2-3                                      ONLINE       0     0     0                                                  
            gptid/084c70aa-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/0aff24d6-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/0dbd832b-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/10670ef6-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/1327e8dc-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
            gptid/15e5089e-60f6-11e8-bc93-f44e059f1a98  ONLINE       0     0     0                                                  
        spares                                                                                                                      
          gptid/1be4166c-60f6-11e8-bc93-f44e059f1a98    AVAIL                                                                       
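FreeNAS built this pool through the GUI, but the rough CLI equivalent is four 6-drive raidz2 vdevs plus a spare. A hypothetical sketch, with plain device names standing in for the gptids above:

zpool create TEST2 \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23 \
  spare da24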
SLOG drive(s) controlled by LSI 9271-8i
AVAGO MegaRAID SAS FreeBSD mrsas driver version: 06.712.04.00-fbsd              
mrsas0: <AVAGO Thunderbolt SAS Controller> port 0xf000-0xf0ff mem 0xfbc60000-0xfbc63fff,0xfbc00000-0xfbc3ffff irq 56 at device 0.0 numa-domain 1 on pci13
mrsas0: Using MSI-X with 16 number of vectors                                   
mrsas0: FW supports <16> MSIX vector,Online CPU 24 Current MSIX <16>            
mrsas0: MSI-x interrupts setup success                                          
SLOG using RAID1 with ST9146803SS drives (146G SAS 10K):
da50 at mrsas0 bus 0 scbus1 target 0 lun 0                                                                                          
da50: <LSI MR9271-8i 3.46> Fixed Direct Access SPC-3 SCSI device                                                                    
da50: 150.000MB/s transfers                                                                                                         
da50: 139236MB (285155328 512 byte sectors)                                                                                         
SLOG using RAID1 with ST9300653SS drives (300G SAS 15K): *** This was RAID0 in the VM-UCS2 tests because I don't have enough 300G 15K SAS drives.
da51 at mrsas0 bus 0 scbus1 target 1 lun 0                                      
da51: <LSI MR9271-8i 3.46> Fixed Direct Access SPC-3 SCSI device                
da51: 150.000MB/s transfers                                                     
da51: 285148MB (583983104 512 byte sectors)                                     
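Swapping SLOGs between test runs is just an add/remove pair. A sketch, assuming the whole RAID volumes (da50/da51 above) are handed to ZFS rather than GPT partitions:

zpool add TEST2 log da50      # attach the 146G 10K RAID1 volume
zpool remove TEST2 da50       # detach it before the next run
zpool add TEST2 log da51      # attach the 300G 15K volume instead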
-----------------------------------------------------------
I have 2 ESXi hosts that have enough local storage to hold the 3 VMs I normally have running all the time. Those are a FreeBSD box (my mail server), vCenter (small), and APC PowerChute Network Shutdown (CentOS).
VM-DL360-G7
HP DL360 G7 - ESXi 6.0.0 - Dual X5672 CPU - 72GB RAM - P410i RAID - 4 x WDC WD7500BPKT RAID5
Mellanox Technologies MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s]
VM-UCS2
Cisco UCS C220 M3S - ESXi 6.5.0 - Dual E5-2670 CPU - 128G RAM - LSI 9266-8i RAID
UCS VIC P81E 10G NIC
-----------------------------------------------------------
The network switch is an HP S5820 (really an H3C switch OEM'd for HP). All connections between hosts and switch are 10G-SR SFP+ over OM3 fiber.
-----------------------------------------------------------
Enough table setting; on to the tests.
DL360-G7 client, freenas (secondary) server
No SLOG: 6.0G down, 342M up (peak) with ~= 250M avg up
Cisco 146G 10K SAS RAID1 SLOG: 6.0G down, 474M up (almost constant)
Cisco 300G 15K SAS RAID1 SLOG: 6.0G down, 730M up (almost constant)
DL360-G7 client, freenas2 (primary) server
6.6G down (peak) with ~= 5G avg, 4.9G up (peak) with ~= 4G average
VM-UCS2 client, freenas (secondary) server
No SLOG: 7.6G down (peak) 7.2G avg (sustained), 344M up (peak) ~= 212M avg (fluctuates)
Cisco 146G 10K SAS RAID1 SLOG: 7.4G down (sustained), 637M up (peak) ~= 432M avg (sustained)
Cisco 300G 15K SAS RAID0 SLOG: 7.5G down (sustained), 847M up (peak) ~= 760M avg (sustained)
VM-UCS2 client, freenas2 (primary) server
8.3G down (peak) 7.8G avg (sustained), 4.2G up (peak) ~= 3G avg (sustained)
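If anyone wants to see where the writes land during one of these Storage vMotions, watching the pool at one-second intervals shows the log vdev soaking up the sync writes; for example, on the primary pool:

zpool iostat -v RAIDZ2-I 1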
-----------------------------------------------------------
First off, I have to say how impressed I am at the read performance FreeNAS is able to deliver, even when using some older drives/controllers. Next, I was astonished by how much difference the SLOG made for NFS write performance. I know most people prefer iSCSI for backing VMware datastores, but I use NFS because I understand it better as a grumpy old Unix guy. I am mostly a network infrastructure person these days, so there isn't a huge motivation to learn iSCSI if I can get NFS to do what I need. I am going to try this again soon, as I bought some used Intel S3500 drives to use as an SLOG on the secondary host. I can certainly run some other tests if anybody would like. I know the Storage vMotion stuff isn't the most objective measure, but it is the thing that matters most to me, so it seemed the ideal test.
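For anyone wondering why the SLOG matters so much here: ESXi issues its NFS writes synchronously, so every write must be committed to the ZIL before the server can ACK it. A quick way to confirm the ZIL is the bottleneck is to take it out of the picture and rerun (unsafe for real data, so test pools only):

zfs get sync TEST2           # standard by default
zfs set sync=disabled TEST2  # test only: ACKs writes before they reach stable storage
zfs set sync=standard TEST2  # put it back when done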
One thing that puzzles me a little bit is how large the difference in write performance is between the 146G 10K drives and the 300G 15K drives. How much is rotational speed, and how much is the drives being larger? I certainly don't understand the architecture of ZFS well enough to answer that, but I am curious.
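A back-of-the-envelope on the spindle-speed part (my arithmetic, not a measurement): average rotational latency is half a revolution, so

  10K RPM: (60 s / 10000) / 2 = 3.0 ms
  15K RPM: (60 s / 15000) / 2 = 2.0 ms

That is a 1.5x reduction, and the 474M -> 730M jump on freenas is about 1.54x, so for small latency-bound ZIL commits rotational speed alone could plausibly explain most of it. Larger platters usually mean higher areal density and faster outer-track transfer rates, which would help the ZIL's mostly sequential writes too, so it is probably some mix of the two.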