As cyberjock has repeatedly indicated to me, people will see it as a license to go ahead and do it while ignoring one or more critical bits.
You really need some familiarity with both FreeNAS and ESXi in order to have a reasonable chance of success. You really need to understand the issues I outlined so that you can avoid them. I think there's probably a lot of room for improvisation, but there is also room for disaster.
For example, if you don't understand that ESXi might helpfully format all attached storage it sees as VMFS datastores during install, then you might make the mistake of setting up and transferring data to a FreeNAS system, dropping in a RAID controller for some datastores, then installing ESXi, and finding that ESXi has just destroyed your pool. I don't think I even warn about that possibility; it's just something that I expect admins would be aware of.
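If you want a quick sanity check after any ESXi install, something along these lines from the ESXi shell will show you what ESXi can see and what it has already claimed. This is a sketch only; your device and datastore names will differ:

```
# List every disk ESXi can see.
esxcli storage core device list

# List which disks are already claimed as VMFS datastore extents.
# A disk that holds (or is destined to hold) a ZFS pool must not show up here.
esxcli storage vmfs extent list
```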
But really, there's a history on the forums of people virtualizing their FreeNAS in various ways that seemed (to them) to be clever. And then things going wrong. So you make a virtual disk file on each of a bunch of ESXi datastores, thinking that you can then "ZFS" them together into a big pool. Well, yes, you can, and it works brilliantly. Until one day one disk starts to fail and ESXi gets tetchy about it (ESXi ... synonym for "not horribly tolerant of hard faults") and all you can tell at the VM level is that things seem slow. Then the disk dies, a week later. Now ESXi freaks and hangs the VM while trying to do disk I/O on its behalf. In the meantime you cannot even boot your FreeNAS VM because one of the datastore resources upon which it depends is not available. And replacing it? Wow, that becomes a bit of a logistical challenge, because you have to manage both the VMware aspects and the FreeNAS ZFS aspects of the recovery. So you screw it up, inadvertently messing up another disk in your RAIDZ1, and your pool is toast, and your data, maybe, gone forever.
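To make the two-layer recovery problem concrete, here's roughly what that "clever" layout and a disk replacement look like. A sketch only: the pool, device, datastore, and vmdk names are invented for illustration, and on a real FreeNAS box you'd normally drive the ZFS side from the GUI rather than the shell:

```
# Inside the FreeNAS VM: the "clever" pool, one virtual disk per ESXi datastore.
zpool create tank raidz1 da1 da2 da3 da4

# When a backing disk dies, recovery spans both layers.
# ESXi side (ESXi shell): create a replacement virtual disk on a good datastore...
vmkfstools -c 2000G -d thin /vmfs/volumes/datastore5/freenas/disk5.vmdk
# ...attach it to the FreeNAS VM via the vSphere client, then, FreeNAS side,
# swap it into the pool:
zpool replace tank da2 da5
```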
That's not even particularly clever. It is a straightforward, "obvious" implementation of FreeNAS with ZFS on top of ESXi in a manner that even an experienced admin might attempt. It will appear to (and indeed actually WILL) work great. Right up to the point where you would normally be getting SATA S.M.A.R.T. errors from the hardware trying to warn you of an imminent drive failure, warnings that ESXi helpfully masks from you. Then all hell breaks loose, because most admins simply don't have experience dealing with complicated VM's with vmdk files on multiple nonredundant datastores.
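You can see the masking for yourself from inside the guest; the device name here is just an example:

```
# Inside the FreeNAS VM, against a disk that is actually a vmdk on a datastore:
smartctl -a /dev/da1
# On a VMware virtual disk this typically reports SMART as unavailable or
# returns nothing useful: the guest never talks to the physical drive, so the
# failing hardware underneath stays invisible to FreeNAS.
```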
And there are other risks. Going the PCI-passthrough route is a promising way to avoid some of the layering issues, but needs server-grade hardware, and relies on the motherboard, controller, ESXi, and the OS to all work together to support it.
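As a rough idea of what's involved (not a how-to): you identify the storage controller from the ESXi shell, mark it for passthrough in the vSphere client, and hand it to the FreeNAS VM. The controller mentioned below is just an example:

```
# From the ESXi shell: find the HBA you intend to pass through to FreeNAS
# (look for your actual controller, e.g. an LSI SAS HBA, in the output).
esxcli hardware pci list
# The device is then toggled for passthrough and added to the VM in the
# vSphere client. VT-d/AMD-Vi must be enabled in the BIOS, and the VM needs
# its full memory reserved, or the passthrough device won't come up.
```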
Clevering oneself to death: Get a machine. Install ESXi. Install FreeNAS and a bunch of VM's on an NFS datastore hosted elsewhere. Configure FreeNAS to serve up NFS, thinking "I'm going to use ZFS to protect my VM's ... ZFS has great error protection features". True, dat. Set up FreeNAS to serve up an ESXi datastore. This *works*. Then Storage vMotion all the VM's onto that datastore. This also *works*. Time passes. Now reboot. It's all gone, because you vMotion'ed the FreeNAS VM onto storage being provided by FreeNAS... wot a mess.
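One cheap guard against exactly that trap, from the ESXi shell (datastore names are whatever yours happen to be):

```
# List every registered VM and the datastore its files live on.
vim-cmd vmsvc/getallvms
# If the FreeNAS VM's own .vmx/.vmdk files are sitting on the datastore that
# FreeNAS itself serves up, you've built the circular dependency above, and
# the next reboot has nothing to boot from.
```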