I haven't expressed myself well. I wanted to say that there may still be free physical space on a pool that cannot be used because of the space reservation set by a big non-sparse ZVOL.
With all due respect, I'd say that could be considered just another example of a bad configuration.
The typical problem we've seen over the last four years is that someone creates an X TB pool and then tries to create a Y TB iSCSI device on it, where Y = X. Users tend to assume that they'll be able to use the entire pool capacity as a block storage device.
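To make the failure mode concrete: a non-sparse zvol carries a reservation equal to its full size, so a pool-sized zvol pins the entire pool the moment it is created. This is a sketch of the arithmetic only (the 10 TB figures are illustrative, not from any particular system):

```shell
# Hypothetical X = Y case: zvol sized equal to the pool.
POOL_TB=10
ZVOL_TB=10
# A non-sparse zvol reserves its whole volsize up front, so the
# space left over for snapshots, metadata, or anything else is:
FREE_TB=$((POOL_TB - ZVOL_TB))
echo "$FREE_TB"    # prints 0
```

The pool may still report free physical space internally, but the reservation makes it unusable for anything other than the zvol itself.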
That used to be a total disaster. Kernel iSCSI improvements, UNMAP, etc. have made it less of one, but they mostly turn what used to be an immediate train wreck into a deferred train wreck: the pool now degrades gradually and eventually runs into trouble as it fills.
Most new users expect to be able to use up to 100% of their iSCSI device, and sooner or later many will try. Given that, the goal should be to educate people not to create zvols that are equal in size to their pools; relying on people not to fill their zvols is a losing strategy. I'm a little burnt out on that whole thing. With that context in mind, my earlier reply might make more sense: I'm not really interested in the clever ZFS possibilities and edge cases. In my experience, most people using iSCSI just need a reliable iSCSI device, and the usual problem is that they under-resource it through various misunderstandings and (eventually) hit badness.
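One way to operationalize "don't size the zvol equal to the pool" is a simple cap. The 50% figure below is my illustrative assumption, not a project-blessed number; conservative deployments (like the one described below) go well under even that:

```shell
# Illustrative sizing helper: cap a zvol at 50% of pool capacity.
POOL_BYTES=$((14 * 1024 * 1024 * 1024 * 1024))   # a 14 TiB pool
ZVOL_BYTES=$((POOL_BYTES / 2))                   # conservative 50% cap
echo "$ZVOL_BYTES"                               # prints 7696581394432 (7 TiB)
# The creation step would then look something like (pool/dataset names hypothetical):
#   zfs create -V "$ZVOL_BYTES" tank/iscsi0
```

The point is to pick the cap up front and build the sizing rule into provisioning, rather than hoping users never fill the device.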
As a counterexample, the new FreeNAS VM storage server I'm working on has 24 x 2TB 2.5" drives in a 2U form factor (48TB raw). Seven vdevs of three-wide mirrors, plus three warm spare drives. I expect to see about 5-7TB of actual guaranteed usable iSCSI space out of the device on a 14TB pool. Very conservative, I know.
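For anyone checking the layout above, the drive count and pool capacity work out like this (a three-wide mirror vdev contributes one drive's worth of capacity):

```shell
# Sanity-check the layout arithmetic for the build described above.
VDEVS=7
MIRROR_WIDTH=3
SPARES=3
DRIVE_TB=2
DRIVES=$((VDEVS * MIRROR_WIDTH + SPARES))   # total 2.5" drives in the chassis
POOL_TB=$((VDEVS * DRIVE_TB))               # one drive of capacity per mirror vdev
echo "$DRIVES $POOL_TB"                     # prints "24 14"
```

The 5-7TB of guaranteed iSCSI space is then roughly 35-50% of the 14TB pool, which is the kind of margin I'm arguing for.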