Increasing the free space helps the fragmentation performance issue both immediately and in the long run. Immediately, the system stops having to work so durn hard to find a contiguous run of blocks for the data being written. In the longer term, you may start to reclaim locality benefits, especially on a highly fragmented pool. Consider something like an OS update inside a VM: if the guest writes out a new 1MB file and ZFS allocates contiguous blocks for it, that's very good, even though those blocks are obviously not contiguous with the rest of the blocks making up the virtual machine disk. In reality, the VM is not terribly likely to read the blocks immediately preceding that file and then continue straight on into it (à la a sequential traversal of all disk blocks); that read pattern would incur a seek, because the newly updated blocks live elsewhere. But this doesn't really matter, because the file itself is contiguous, so when someone runs that program or opens that file, minimal seek activity gets you the data quickly!
The real problem with VM service is that you can get a lot of 'shreddy' behaviour. Think of inodes. If you leave atime updates enabled, every time your VM reads a file the inode for that file gets rewritten, which means (at the ZFS level) a small block of data is allocated and another small block is freed. For things like source code trees, where files are read many times over, this makes a real mess on disk. Sooner or later you end up with lots of little blocks scattered all throughout the pool's free space, and no large chunks of space left to allocate from. Then when you need to write a large file, it has to be split across more than one region of space. And when that file is ultimately freed, you're not left with a single large chunk of free space but with two smaller ones; meanwhile, the file that replaced it also struggled to find space and got broken up into three chunks, and the two smaller chunks that were freed got allocated to some other smallish pieces of data. This merry-go-round doesn't stop until the system finds an unhappy balance of some sort.
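If atime churn is the culprit, one easy mitigation is to simply turn atime updates off on the dataset holding the VM images (most guests don't care). This is a sketch; `tank/vms` is a hypothetical dataset name standing in for wherever your VM disks live:

```shell
# Stop ZFS from rewriting inodes (and thus allocating/freeing small
# blocks) on every read from this dataset.
zfs set atime=off tank/vms

# Confirm the property took effect.
zfs get atime tank/vms
```

On OpenZFS you can also compromise with `relatime=on`, which only updates atime when it's older than the file's mtime/ctime or a day old, cutting most of the metadata write traffic without disabling atime entirely.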
So to maintain write performance, we basically pull a trick on the system: we make sure it has PLENTY of space for contiguous writes. This doesn't actually eliminate fragmentation, but it does make it less likely that you'll go madly seeking all over the disk to get at a run of blocks the VM is likely to request together (such as a file).
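The effect of free space on contiguity can be sketched with a toy first-fit extent allocator. This is an illustration only, not how ZFS's metaslab allocator actually works: when no single free extent is big enough, the allocation gets split, which is exactly the fragmentation merry-go-round described above.

```python
def allocate(free_extents, size):
    """Allocate `size` blocks first-fit from a list of (offset, length)
    free extents, splitting the allocation across extents when no
    single extent is large enough. Returns the (offset, length)
    extents used and mutates free_extents in place."""
    used = []
    remaining = size
    i = 0
    while remaining > 0 and i < len(free_extents):
        off, length = free_extents[i]
        take = min(length, remaining)
        used.append((off, take))
        remaining -= take
        if take == length:
            free_extents.pop(i)       # extent fully consumed
        else:
            free_extents[i] = (off + take, length - take)
            i += 1
    return used

# A shredded pool: only small holes left in free space.
fragmented = [(0, 4), (10, 4), (20, 4), (30, 4)]
# A pool with plenty of contiguous free space.
roomy = [(0, 1000)]

print(allocate(fragmented, 10))  # → [(0, 4), (10, 4), (20, 2)], three seeks
print(allocate(roomy, 10))       # → [(0, 10)], one contiguous run
```

The roomy pool satisfies the 10-block write with a single contiguous extent; the fragmented one needs three separate regions, and reading that file back later means three seeks instead of one.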