And that's precisely what 9.3 takes care of. It will make sure that the file systems are quiescent at the moment of the snapshot. Josh Paetzel is taking care of that, because we've seen users with this problem and the only "good" solution is to manage this entirely on the FreeNAS/TrueNAS server. Even things like Veeam work fine, but there's no way to be 100% sure that Veeam has quiesced the file system before the ZFS snapshot takes place, since Veeam has no way of communicating with FreeNAS. So FreeNAS will talk to ESXi directly to make everything all p-rrty and stuff. ;)
The latter half of your message makes no sense; I'm not sure what role you think Veeam is playing there. The first half is merely strange.
Okay, now, here's the problem. This paragraph is background for the audience; I think and hope you already understand it. So you have a virtual machine disk file being served up via NFS. We'll call it a vmdk. Now, at this exact moment, you take a snap while the VM is running, which gives you a potentially inconsistent disk image, because some writes might have been in flight or not fully committed, and who knows what the OS was doing. If you look at the snapshot, it looks like the image of a disk from a machine someone shut down hard while it was running. It may be recoverable via fsck/chkdsk/whatever, or it may not. In an OS that carefully writes its metadata out in an orderly fashion, it ought to be recoverable, but, y'know, real world and all that... This is the conventional "snapshot" problem.

So VMware introduced a sync driver component in VMware Tools, which allows the hypervisor to ask the VM to quiesce its disk(s). The OS receives a quiesce request and is then supposed to flush dirty buffers, and do anything else necessary to make the filesystem consistent and suitable for backup. This operation necessarily causes (in its simplest form) I/O to be paused within the VM, in order to give the hypervisor time to execute a snapshot; a corresponding resume operation returns things to normal. Note that there are actually a few different mechanisms for the quiescing operation, including the sync driver, the vmsync driver, and Microsoft VSS. But let's keep this simple.
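If you want to see what that request actually looks like from the management side, here's a minimal sketch using pyVmomi (the Python vSphere SDK). The host, credentials, and VM name are placeholders, and error handling is stripped out; the point is just the quiesce=True flag, which triggers the sync-driver/VSS dance inside the guest:

```python
# Minimal sketch (placeholders throughout): ask ESXi for a quiesced
# snapshot of a single VM through the vSphere API via pyVmomi.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certs for real
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Simplest possible lookup: walk the inventory for the VM by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-vm")
    view.DestroyView()

    # memory=False: don't dump RAM. quiesce=True: VMware Tools asks the
    # guest OS to flush dirty buffers and pause I/O before the snapshot.
    task = vm.CreateSnapshot_Task(name="pre-backup",
                                  description="quiesced for backup",
                                  memory=False, quiesce=True)
    WaitForTask(task)
finally:
    Disconnect(si)
```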
During that quiesced state, it is safer (not necessarily safe, merely safer) to make a snapshot, and you're more likely to get a consistent disk image. But there are caveats. The biggest ones:
1) The VM has to be running appropriate drivers and an OS that is agreeable to quiescing.
2) The VM's I/O has to be of a nature that it can be quiesced for a period of time; many busy server VMs do not qualify.
But now comes a more interesting issue. Let's take it for granted that you have a VM where you can actually do this. Lots of VMs can. And that's the problem: your typical datastore and hypervisor do not service a single VM. You might have a hundred VMs on that datastore.
Doing the snapshot at the datastore level requires that *all* the VMs be quiesced, which introduces a new issue: you get a shitstorm of writes to the datastore as all the VMs flush their buffers, and then they ALL have to wait, first for the quiescing to succeed, then for the snapshot to occur, then for the resume. That's because the datastore-level snapshot can't pick and choose which VM to snapshot.
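For illustration, the coordination involved looks roughly like this. This is a sketch under my own assumptions, not FreeNAS's actual code: it assumes it runs on the storage server (so zfs is local), si is a pyVmomi connection as in the earlier snippet, and the dataset and snapshot names are invented:

```python
# Sketch: quiesce every running VM on a datastore, snapshot the backing
# ZFS dataset while they're all parked, then release them.
import subprocess

from pyVim.task import WaitForTask
from pyVmomi import vim

def snapshot_datastore(si, datastore_name, zfs_dataset):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == datastore_name)
    view.DestroyView()

    vms = [vm for vm in ds.vm if vm.runtime.powerState == "poweredOn"]

    # Phase 1: every VM flushes its buffers and pauses I/O. This is the
    # write shitstorm described above -- all at once, all waiting.
    tasks = [vm.CreateSnapshot_Task(name="zfs-coord", description="",
                                    memory=False, quiesce=True)
             for vm in vms]
    for t in tasks:
        WaitForTask(t)

    # Phase 2: the ZFS snapshot itself is cheap and atomic.
    subprocess.run(["zfs", "snapshot", f"{zfs_dataset}@coordinated"],
                   check=True)

    # Phase 3: release everyone. Each removal also consolidates the
    # delta disk that accumulated while the VM snapshot existed.
    for vm in vms:
        snap = vm.snapshot.currentSnapshot
        WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))
```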
(By comparison, Veeam uses ESXi VM snapshots to manage its tasks, which means it works one VM at a time, but with a lot more data being shoveled around. This is not necessarily better or any more desirable, just a different set of evils.)
So then we go off into alternative realms, and it just gets worse.
A) If you're not using NFS and are instead using iSCSI, then you have a vmfs3- or vmfs5-formatted zvol; this is even more crazy-making to recover things from, and it still suffers from approximately the same issues.
B) You can provision a separate ZFS dataset for each VM. This gives you the sort of per-VM granularity you need for snapshots (see the sketch below), but it turns into an NFS (or iSCSI) mount-management nightmare. Nobody really wants an individual datastore for each VM. It doesn't scale.
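Just to make the granularity point in (B) concrete, here's what that layout buys you, sketched with the stock zfs CLI driven from Python. Pool, dataset, and VM names are made up:

```python
# Sketch of option B's layout: one dataset per VM, so each VM's disk
# image can be snapshotted (and rolled back) independently. Note that
# each dataset also needs its own NFS export -- which is exactly the
# mount-management nightmare described above.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

for vm in ["web01", "db01", "build01"]:
    zfs("create", f"tank/vmstore/{vm}")              # one dataset per VM
    zfs("set", "sharenfs=on", f"tank/vmstore/{vm}")  # one export per VM, too

# Now a snapshot touches exactly one VM's disk image, nobody else's:
zfs("snapshot", "tank/vmstore/db01@pre-upgrade")
```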
Which brings me back to what I was saying earlier in this thread. It's great to have better ESXi support, but it is only incrementally better, because some of the underlying problems are inherently Hard (Big Giant Capital H Hard).