Aw crap! That's not how I viewed this situation either. I thought that by setting a quota I was reserving free space on my disks. In fact, I'm still not convinced of your answer.

No. It means once your dataset hits the quota you'll be in the same situation.
The fix is to manage your disk space and not let the pool get that full.
But that's not a good solution. Why can't the FS reserve at least some KB for its next transaction? Or why can't I reserve space so that the filesystem won't accidentally go dead? When the problem is known, why are there no mechanisms to prevent it?
In fact, this means that you cannot offer network storage from FreeNAS to users or services, especially where you cannot estimate the final size.
Bye! Marco
There are mechanisms to prevent this.
You get an e-mail warning you if your pool is over 80% full. Since performance tanks around that mark, it's a silly idea to fill it further.
No, this is not a mechanism to prevent this. It is a notification.
For me this is a bug, at least in design, because a legal transaction leads to an inoperable state.

It isn't really a bug; it's a general issue for at least some CoW filesystems. It gets a bit complicated because you always need to be able to allocate new space before freeing the old, which means that each potential strategy to protect against this has at least some issues, but it /can/ be done.
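The allocate-before-free constraint can be illustrated with a toy copy-on-write store. This is a hypothetical sketch of the general CoW problem, not ZFS internals; the class and numbers are made up for illustration:

```python
# Toy copy-on-write store: every change, even a delete, must first
# allocate a new metadata block before the old one can be freed.
# Simplified illustration of the CoW issue, not how ZFS is implemented.

class CowStore:
    def __init__(self, total_blocks, reserved=0):
        self.total = total_blocks
        self.reserved = reserved   # blocks held back for metadata updates
        self.used = 0

    def free_blocks(self):
        return self.total - self.used

    def write(self, blocks):
        # User writes may not dip into the reserve.
        if self.free_blocks() - blocks < self.reserved:
            raise OSError("ENOSPC: not enough space for write")
        self.used += blocks

    def delete(self, blocks):
        # Deleting data still needs one new block for the updated
        # metadata tree before the old blocks can be freed.
        if self.free_blocks() < 1:
            raise OSError("ENOSPC: cannot even delete, no block for metadata")
        self.used += 1           # allocate the new metadata block first
        self.used -= blocks + 1  # then free the data and the old metadata

# Without a reserve the store can wedge itself:
full = CowStore(total_blocks=10, reserved=0)
full.write(10)
try:
    full.delete(5)
except OSError as e:
    print(e)  # even deletes fail: the "inoperable" state from this thread

# With even a tiny reserve, deletes always go through:
safe = CowStore(total_blocks=10, reserved=1)
safe.write(9)   # writing 10 would be refused, keeping the reserve intact
safe.delete(5)
print(safe.free_blocks())
```

The point of the sketch: once the last block is handed to user data, there is no room left to write the new metadata that a delete needs, so holding some space back up front is the only way out.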
I disagree with both of those statements. Well, at least in parts.
- ZFS is always announced as a robust solution. Shouldn't a robust FS work in any situation, even when performance is low?
- FreeNAS is announced as a low-cost solution. And yes, it fits that role well (I love it). But when there is not much space to spare (hardware limited for any reason, like cost or free HDD slots), it is hard to predict space requirements.
I would think the proper way to handle this "bug" would be for ZFS to check before beginning a transaction. Once available space drops below some threshold, every transaction would first check whether there is enough room to complete it while still leaving room for a transaction that frees space by deleting blocks. If there isn't, the transaction is aborted. Of course, the specifics of how much needs to be reserved and what threshold triggers this additional check are left as an exercise for the reader.
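The proposed check could be sketched roughly like this. All names, thresholds, and reserve sizes are made-up illustrations of the idea, not actual ZFS code or tunables:

```python
# Sketch of the proposed pre-transaction admission check: below a
# low-space threshold, every transaction is checked against the
# remaining room before it starts; transactions that free blocks are
# always admitted. Threshold and reserve values are arbitrary.

LOW_SPACE_THRESHOLD = 0.05  # start checking when less than 5% is free
METADATA_RESERVE = 4        # blocks kept back so deletes can still run

def admit(transaction_blocks, free_blocks, total_blocks, frees_space=False):
    """Return True if the transaction may start, False if it must abort."""
    if frees_space:
        return True  # space-freeing transactions always get through
    if free_blocks / total_blocks >= LOW_SPACE_THRESHOLD:
        return True  # plenty of room: skip the extra check entirely
    # Low-space mode: the transaction must fit and still leave the reserve.
    return free_blocks - transaction_blocks >= METADATA_RESERVE

print(admit(10, free_blocks=500, total_blocks=1000))  # True: above threshold
print(admit(10, free_blocks=20, total_blocks=1000))   # True: fits, leaves reserve
print(admit(20, free_blocks=20, total_blocks=1000))   # False: would be aborted
print(admit(20, free_blocks=5, total_blocks=1000, frees_space=True))  # True
```

Note the design choice the thread is circling around: the cost of the extra check is only paid in low-space mode, and deletes bypass it entirely so the filesystem can always dig itself out.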
Can this issue be cleared by replacing the drives with larger-capacity ones and resilvering?
I do have a shirt and hat from them...