Regardless of the protocol used to share it (SMB/NFS/iSCSI), filling a ZFS pool to 100% will do Very Bad Things.
This is because of the nature of ZFS as a "transactional" and "copy-on-write" filesystem. What this boils down to is that there's no such thing as a "partial write" - similar to how a SQL or other transactional database won't perform an operation unless the entire transaction can be committed to its tables, ZFS won't write a new record, change an existing one, or mark one for deletion unless there's enough space to write a new copy of the necessary data and/or metadata describing the change. It then has to commit that metadata/pool-state change all the way up the tree, until finally updating the uberblock to say "the new pool state is valid."
So if you fill the filesystem to a true 100%, there's no space for ZFS to record "hey, I'd like to delete this 128K record" - it has no way to keep the pool's "current state" valid/immutable for the past transaction (it can't overwrite or delete in place) while writing the metadata that says "delete record XYZ" for the "future state."
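If you want a safety net against ever hitting that wall, one common trick is to park a reservation on an empty dataset you never write to; in an emergency you can shrink or destroy it to give ZFS the breathing room it needs to process deletes. A rough sketch - the pool name "tank" and the 230G figure are just placeholders, size it to roughly 10% of your own pool:

    # Set aside ~10% of the pool so it can never reach a true 100% full
    zfs create -o refreservation=230G -o canmount=off tank/reserved

    # In an emergency, shrink the reservation to free space for deletes
    zfs set refreservation=100G tank/reserved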
As for the other fill levels - with block storage (or SMB being treated as block-equivalent by serving VHD(X) files) the challenge is fragmentation. Using all NAND lets you avoid the latency penalty of physical disk seeks, but you'll still likely see some degradation in write performance if you manage to outrun your SSDs' garbage-collection routines and have to write into dirty/partially used blocks. Leaving some free space lets the SSD write into unmapped space, which is faster. 50% was Ye Olde Thumbrule for when you'd start to see noticeable pain on spinning disks. NAND you can usually push higher than that, but as mentioned, watch out for the GC routines. Better SSDs tend to be able to push closer to the wall; it depends on their firmware, amount of internal overprovisioning, etc. If your SSDs are Intel DC/HGST/etc. you may have no problems until 80%+ - if they're SuperHappyFunBee from the Amazon bargain bin, less so.
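You can keep an eye on how close you're getting to that pain point straight from the pool stats, and on OpenZFS you can also hand freed blocks back to the SSDs so their garbage collection has clean flash to work with. Something along these lines (again, "tank" is a placeholder pool name):

    # Watch capacity and fragmentation as the pool fills
    zpool list -o name,size,allocated,free,capacity,fragmentation tank

    # Let ZFS TRIM freed blocks automatically so the SSDs' GC sees them as clean
    zpool set autotrim=on tank
    # ...or kick off a manual trim
    zpool trim tank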
But here's what you can do.
ZFS does inline compression very well using LZ4 or ZSTD (the former tends to be faster, the latter tends to compress better - test with your dataset!), so you can certainly create a sparse ZVOL that's around half the size of your pool. You're striping 5x480G, so you'll get roughly 2.3T usable in the pool; make a 1T sparse ZVOL and start loading data onto it. Compare the logical size of the data you put on it (the VHDs' allocated sizes) against what it actually consumes and see what kind of compression numbers you get. If you're getting a relatively conservative 1.33:1 compression ratio, that lets you make another 1T ZVOL and use a grand total of only about 1.5T of actual NAND to hold 2T of VHDs - well under the pool's capacity, with margin to spare.
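If you want to try that, here's a sketch of the commands - the pool/ZVOL names and the volblocksize are placeholders, so tune the block size to your VHD workload and swap lz4 for zstd if that wins on your data:

    # Sparse (thin-provisioned) 1T ZVOL with inline LZ4 compression
    zfs create -s -V 1T -o compression=lz4 -o volblocksize=64k tank/vhdstore1

    # After loading some VHDs, compare logical vs. physical usage
    zfs list -t volume -o name,volsize,logicalused,used,compressratio tank/vhdstore1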
Warning, here be potential dragons.
If you get better compression, and/or you're absolutely confident that you won't mess something up, you can decide to overcommit storage by adding a third 1T ZVOL (3T logical) and making the necessary blood sacrifice to the compression gods to fit that into ~2.25T, just squeaking into that 2.3T of physical space. But if you're running a 5-drive stripe you're probably okay with some risk anyway. ;)
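If you do go the overcommit route, it's worth periodically checking how much logical space you've promised against what the pool can physically back. A quick way to eyeball it ("tank" is still a placeholder):

    # Total logical size committed across all ZVOLs, in bytes
    zfs list -Hp -r -t volume -o volsize tank | awk '{sum+=$1} END {print sum}'

    # What the pool has actually allocated vs. what's still free, in bytes
    zpool list -Hp -o allocated,free tank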
Cheers