You're missing the most important argument being made here: that the email notifications are insufficient. If the email notifications were sufficient, we wouldn't have this thread at all. You'd get emails and you'd take action. The *whole* argument is that emails are NOT sufficient to prevent this condition (which I'm trying to politely say is idiocy). Emailing is the *only* thing you can truly rely on to prevent problems.
Is anyone actually making this argument? I don't think I've seen anyone argue that emails are at issue here. But I'll bite: the emails are helpful, but no, they aren't sufficient to prevent the problem. A process might error out and start filling the disk faster than you can reach a terminal to fix it. Or maybe the network went down and the email was lost. There are all sorts of reasons the email isn't sufficient to prevent the disk from reaching 100%. The primary reason is that the email message itself doesn't stop the disk from filling; that requires administrator action.
The actual discussion is that however it happened, the disk is full. How do you recover?
If the argument is that you want some way to prevent the pool/dataset from hitting a "full" condition (ZFS going "solid" and being locked), and you want to use datasets with quotas and/or reservations to do that, I just explained why those do NOT work.
No, that's not the issue. The pool is full, how do you recover is the discussion.
If emailing is insufficient then you need to find another file system that will let you be less responsible for the consequences of filling a drive/RAID array.
No, emailing is a feature of FreeNAS, and presumably other systems. It has nothing to do with ZFS. The solutions in this thread are applicable to any system that is using ZFS.
As for your comment about creating a file to explain that it's there to alleviate the 100% capacity condition: there is zero difference between your "use a single file" scenario and a reservation on a dataset. ZFS will treat them the same. Remember, a dataset with a reservation automatically allocates that space to itself. It is immediately "used" disk space taken from the pool and given to the dataset. tl;dr: create a 1TB file or a dataset with a 1TB reservation and the result is exactly the same: 1TB is "used" in the pool.
The file is there to explain what the dataset and reservation are for, not to take up space. The file only needs to be a paragraph or so. The dataset would carry a reservation of 1-20GB, or maybe 100GB: whatever makes sense to preserve enough space for the transactions needed to free up space.
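The setup described above can be sketched with standard `zfs` commands. The pool name `tank`, the dataset name `reserve`, the 20G figure, and the FreeNAS-style `/mnt/tank` mount path are all illustrative assumptions, not anything prescribed in this thread:

```shell
# Assumption: pool is named "tank"; names and sizes are illustrative.
# Create a dataset whose only job is to hold space in reserve.
zfs create tank/reserve
zfs set reservation=20G tank/reserve

# Drop in a short note so a future admin knows why this dataset exists.
cat > /mnt/tank/reserve/README.txt <<'EOF'
This dataset carries a 20G reservation so the rest of the pool cannot
consume 100% of the space. If the pool reports "full", shrink this
reservation (zfs set reservation=none tank/reserve) to free working
space for deletes, then restore it afterwards.
EOF

# The reservation is charged against the pool immediately; verify:
zfs get reservation,used tank/reserve
```

The point of the README is exactly what the poster describes: the reservation does the work, and the file only documents it.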
1. emails are your protection from the pool going solid. PERIOD. If this isn't good enough, take your ball and go elsewhere. ZFS clearly isn't for you.
2. There is no way to "hack" zfs by creating a dataset, file, zvol, etc with any kind of quota and/or reservation to get around this limit.
3. If #2 is unpalatable, see #1.
No one is trying to use hacks to keep the pool from going to 100%. We are looking for ways to recover access to the pool if it does get there.
This is a non-discussion. I can't believe we're on page 3 of posts on this topic and people are still discussing this. We've had this discussion dozens of times on this forum and it never changes...
I'm clinging to the hope that you just haven't read what people are actually discussing. Are you really trying to say people shouldn't be discussing ways to keep the pool useable in the event that it does get to 100%?
Is your position honestly that it's not worth discussing because the pool should never get there because FreeNAS sends email and that's clearly all that's required?
Remember the original argument, bro. It wasn't about how to allow for "more rapid recovery".
So now that it's about "more rapid recovery" are you ok with discussing that?
**edit** I take this back. The original post was about how to recover from this. Email alerts didn't come up until halfway through page two, and that was today. The original post was in 2013, and this topic has indeed been brought back up several times since then. In other words, it's always been about recovery, with a few detours along the way.
**end-edit**
A server admin said "but that's not a good solution" referring to an admin having to zero out a file somewhere.
At least in my opinion it's not a good solution because it feels like a bug. ZFS is supposed to be copy on write, I wouldn't expect truncating a file to free up any space. What happens if I truncate that file and then try to restore it from a snapshot? Do I get the data? Or is it gone because I wrote to it from /dev/null? Does truncating the file only free up space when the pool is full? Is that because the metadata fails to write but the truncation succeeds? In every scenario this behavior sounds like a bug and I'd expect it to be fixed.
Reserving space that can be utilized if the rest of the pool hits 100% doesn't sound like a bug and it sounds like a reliable mitigation strategy.
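The two recovery paths the thread discusses can be sketched as follows. These are hedged examples, not a definitive procedure: the pool name `tank`, dataset `tank/reserve`, and file paths are assumptions, and whether truncation frees space on a given full pool is exactly the behavior being debated above:

```shell
# Assumptions: pool "tank"; a reserve dataset was created in advance.

# Path 1: a reservation was set aside ahead of time. Release it to
# get working space, delete/clean up as usual, then restore it.
zfs set reservation=none tank/reserve
# ...rm files or destroy snapshots here...
zfs set reservation=20G tank/reserve

# Path 2: no reservation exists. Per the thread, a plain `rm` may fail
# on a full copy-on-write pool, but zeroing out a large file in place
# has been reported to succeed:
cat /dev/null > /mnt/tank/some-large-file
# or equivalently:
truncate -s 0 /mnt/tank/some-large-file

# Before deleting anything, check whether snapshots are what's
# actually holding the space:
zfs list -t snapshot -o name,used -s used
```

Path 1 is the "reliable mitigation strategy" endorsed at the end of the thread; Path 2 is the workaround that the poster above argues feels like a bug.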