I had seen something similar on a test pool a long time ago.
UFS reserves a slice of each filesystem (the minfree reserve) to limit fragmentation, and it does this by hiding that capacity from non-root users. Even on a completely allocated filesystem, though, files can still be removed, because UFS updates its metadata in place and a deletion requires no additional space.
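If anyone wants to poke at this, the UFS reserve is visible and tunable from the command line. A rough sketch on FreeBSD, assuming I remember the flags right (the device name is made up):

    # print current tunables, including the minimum free space percentage
    tunefs -p /dev/ada0p2
    # set the reserve to 8% (run against an unmounted filesystem)
    tunefs -m 8 /dev/ada0p2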
ZFS doesn't maintain reserved space in that manner. ZFS is a copy-on-write filesystem, which means that in general, writes land on newly allocated blocks rather than overwriting old ones. Since the metadata involved in a deletion (I'm guessing the directory itself, at minimum) has to be updated, some free blocks must be available for the updated metadata before ZFS can free the file's blocks (and the old metadata). That's the danger of COW. Snapshots would seem to complicate this further; in fact, I can imagine snapshots being a real pain to unwind in a disk-full situation.
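The snapshot complication is easy to see for yourself. A quick sketch, with made-up pool and dataset names:

    # snapshot the dataset, then delete a large file from it
    zfs snapshot tank/data@before
    rm /tank/data/bigfile
    # the file is gone from the live filesystem, but the pool reclaims
    # almost nothing: the blocks are still referenced by the snapshot
    # and are now charged to its USED column
    zfs list -o name,used,refer tank/data tank/data@before

Until the snapshot is destroyed, deleting the file frees essentially no space, which is exactly what you don't want on a full pool.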
It's kind of interesting that you were able to truncate a big file to fix it; that would seem to imply either that a tiny bit of space was still free and that was all the COW needed, or that there's a non-COW path through the code that truncation can take.
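If I'm reading your situation right, the recovery sequence would look something like this (filename made up; I haven't verified which step does the trick):

    # truncate the file in place, freeing its data blocks without
    # rewriting the directory entry
    truncate -s 0 /tank/data/bigfile    # or: > /tank/data/bigfile
    # with some blocks freed, the normal unlink should now succeed
    rm /tank/data/bigfile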
I wonder how much space an operation like this actually needs. Coming from the old days of UFS on disks of a few megabytes, I'd like to think that metadata updates couldn't require more than a megabyte of space even for ZFS, but I have to keep reminding myself that ZFS's motto is "Think Big, Eat Big." And how would you implement a reserve? Creating an "emergency" file won't work unless you can be sure it doesn't get snapshotted inadvertently. Maybe creating a volume or an unmounted filesystem? "zfs create -o reservation=1M -o mountpoint=none pool/.reserved-space" does seem to reduce the available space on a pool, but I haven't tried filling a pool configured that way.
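If that approach holds up, the emergency procedure would be as simple as handing the reservation back to the pool. A sketch, reusing the dataset above (untested, as I said):

    # one-time setup: an empty, unmounted dataset holding the cushion
    zfs create -o reservation=1M -o mountpoint=none pool/.reserved-space
    # in a disk-full emergency, release the reserved space...
    zfs set reservation=none pool/.reserved-space
    # ...delete whatever you need to, then restore the cushion
    zfs set reservation=1M pool/.reserved-space

Since the dataset is empty and unmounted, even a recursive snapshot of the pool shouldn't pin any data in it.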
I would also be interested in hearing William's reasoning. FreeNAS has no problem allocating 2GB per disk for swap that isn't strictly necessary. Having your filesystem fill up and strand terabytes of data on a fileserver seems bad enough that it would be worth a little space, since you can't safely use that space anyway.