Updated approach to my VMware NFS use of FreeNAS 9.2.1.7.
I finally understand more about the ZIL (SLOG), and so I bought:
1) Dell XPS 8700 - Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz, with 32GB RAM.
2) 2 x 500 GB Samsung EVO SSDs -
Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
3) 1 x 120 GB Kingston for ZIL -
Kingston Digital 120GB SSDNow V300 SATA 3 2.5 Solid State Drive (SV300S37A/120G)
Steps:
1) Used the web GUI to configure the 2 x 500 GB SSDs as a mirrored zpool with lz4 compression (initial default compression ratio 6.58x).
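For anyone who wants to check the compression numbers from the shell rather than the GUI, a minimal sketch (pool name from my setup; note the high initial ratio is a near-empty-pool figure and drops as real VM data lands):

```shell
# Show the compression algorithm and the ratio actually achieved so far
zfs get compression,compressratio aeraidz
```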
2) Used the web GUI to configure the 1 x 120 GB Kingston as a ZIL (as its own pool), and then used the GUI to detach it (leaving the formatting in place). *I understand the ZIL does not need anywhere near 120 GB, but it was only $50 and I have no other use for that SSD.
3) I used the command line to attach the ZIL (SLOG) to the zpool:
# zpool status
  pool: aeraidz
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        aeraidz                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/cd5048df-b7ec-11e6-8d53-6805ca4185e3  ONLINE       0     0     0
            gptid/cd6475c8-b7ec-11e6-8d53-6805ca4185e3  ONLINE       0     0     0
        logs
          ada1p2                                        ONLINE       0     0     0
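The command-line step in 3) was, as best I can reconstruct it (the device name is taken from the zpool status above; `zpool add ... log ...` is the standard way to attach a separate log device):

```shell
# Attach the Kingston SSD partition as a dedicated log (SLOG) device
zpool add aeraidz log ada1p2

# Verify the device now appears under "logs"
zpool status aeraidz
```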
4) I then set up an NFS share and mounted it on my ESXi 5.5.0 hosts, with the sync property left at its default (standard):
[root@aenas3 /]# zfs get sync
NAME                     PROPERTY  VALUE     SOURCE
aeraidz                  sync      standard  default
aeraidz/.system          sync      standard  default
aeraidz/.system/cores    sync      standard  default
aeraidz/.system/rrd      sync      standard  default
aeraidz/.system/samba4   sync      standard  default
aeraidz/.system/syslog   sync      standard  default
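For completeness, here is roughly what mounting the export on an ESXi 5.5 host looks like from its shell (hostname, export path, and datastore name below are placeholders, not the actual values from my setup):

```shell
# Hypothetical example: mount the FreeNAS NFS export as an ESXi datastore
esxcli storage nfs add --host aenas3 --share /mnt/aeraidz/vmware --volume-name aenas3-nfs

# List mounted NFS datastores to confirm
esxcli storage nfs list
```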
5) Then I did a migration of a 160 GB (provisioned; 72 GB used) VM and WO HO... I got 90 MB/s writes.
5a) The FreeNAS network interface was almost fully saturated (1 Gb network).
5b) ada1 = ZIL; ada2/ada3 = mirror zpool.
5c) Here's the VMware write performance (in KBps) of the migration.
6) I did additional migrations and experimented with sync=disabled and sync=always, but neither materially changed the write performance on other VMware migrations.
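The sync experiments in 6) were just the standard property toggles (shown here against the whole pool, as in my setup; they can also be set per-dataset):

```shell
# Unsafe: acknowledge sync writes before they reach stable storage
zfs set sync=disabled aeraidz

# Treat every write as synchronous
zfs set sync=always aeraidz

# Back to the default: honor the client's sync requests (NFS/ESXi issues sync writes)
zfs set sync=standard aeraidz
```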
Another data point - I have another FreeNAS box
- Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz with 8GB RAM.
- Single (striped zpool) Samsung 850 EVO 500GB SSD.
- NFS to VMware
With sync=disabled (very risky) I get ~90 MB/s, BUT only ~7 MB/s with sync=always. 7 MB/s is just too slow for practical VMware use.
CONCLUSION:
- It looks like the ZIL (SLOG) really works for the VMware/NFS case. Amazing that after a couple of years of fooling with this, a simple SLOG addition seems to have made the difference between usable and unusable performance for VMware on NFS.
- I think I have a fully 'sane' FreeNAS setup with sync=standard and reasonable VMware NFS performance. And with compression, that 500 GB easily extends to 1 TB+ of space for my VMs.
I'd be interested in comments - particularly on whether I have actually achieved a safe solution (e.g. ZFS metadata preserved, sync=standard) with an adequate level of VMware NFS performance.