Yes, I know that. And I GREATLY appreciate it, and all the rest of the information you included. Believe me, I'm not criticising you at all! I am criticising the ones at Sun who decided that good engineering was too expensive, or tiresome, or hard. And, a little bit, the FreeNAS folk for abandoning UFS rather than gluing the useful (e.g. scrubbing) features to it.
I didn't take your post as criticizing me. I took the stance that you don't want to kill the messenger, but the message sucks.
Quite literally, OpenZFS is doing some great things to make ZFS scale down to smaller systems (to the point that it's a bit more palatable for home users). There were problems that plagued ZFS for a long time, and only an expert ZFS admin would know to avoid them. A great example is the async destroy of datasets.
Say you had a 20TB dataset that you wanted to destroy. Until circa 2013, you literally had to delete all the contents, *then* destroy the dataset. ZFS destroy commands were synchronous before that. That meant that if your dataset held 20TB, you had to free all 20TB in a single transaction. That sucked because it could literally take the zpool out of commission while ZFS went looking for all the bits that needed to be cleared. In some cases it took mere hours; in other cases, multiple days. Every ZFS admin at Sun knew about this potential problem and made customers *very* aware of the issue and to "not do that".

I've personally seen customers in production destroy a 20TB zvol, and then the system stopped all activity while ZFS tried to do its cleanup, ultimately locking out all workloads. (Remember that workloads cannot write to the zpool while the transaction trying to close contains a destroy.) So the customer, thinking he did something very wrong, rebooted the machine. The problem: rebooting just means that when ZFS goes to mount the zpool, it *must* complete that transaction before the mount process can begin. You might give it a few hours, then hit the reset button again. Unfortunately, you now have a new problem: every time you reboot, ZFS has to restart the destroy from scratch. So you did yourself no favors by interrupting it. The only solution was to wait it out, no matter how long, or simply give up on ever getting the data off of that zpool. I have personally seen people take production systems and literally make them useless for 5 days because they did exactly this.
Now that Sun is gone and Oracle has closed their ZFS branch, the OpenZFS project created the feature flag "feature@async_destroy". Now ZFS destroys the dataset immediately, then clears the disk space in the background using available I/O (scheduled not to conflict with workload I/O). So if you destroyed a 20TB dataset, you wouldn't see the disk space become immediately available. With every transaction group (every 5 seconds by default on FreeNAS), you'd see a little more free space than 5 seconds earlier. Workloads continue doing what they need to do and everyone is happy. I've seen 12TB datasets get destroyed and freed in about 6 hours.
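If you want to see this in action, a rough sketch (the pool name "tank" and dataset name "tank/bigdata" here are hypothetical placeholders; substitute your own, and note these commands need a live pool and appropriate privileges):

```shell
# Check whether the async_destroy feature flag is active on the pool.
zpool get feature@async_destroy tank

# With async destroy, this returns almost immediately instead of
# blocking until every block has been freed.
zfs destroy -r tank/bigdata

# The space is then reclaimed in the background, a bit per transaction
# group. The pool's "freeing" property reports how many bytes are still
# queued to be freed; watch it shrink over time.
zpool get freeing tank
```

On an old pool the flag may show as "disabled" until you upgrade the pool (`zpool upgrade`), which is a one-way operation, so don't do it casually.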
You'll probably not hear about this problem (it was rarely discussed in these forums), but it absolutely existed, and a few people learned the hard way that ZFS expected you to be a pro at ZFS. That was Sun's expectation, but it gets harder to manage when every "user" must also be "an expert" to have a good experience.
There is lots more coming from OpenZFS in the future that will resolve more long-term issues. The big ones I'm aware of are L2ARC compression and the L2ARC not being discarded on a reboot. Of course, everyone wants BPR (Block Pointer Rewrite), as that would let you defrag a zpool (awesomeness!). But with each passing year, that's looking more and more like "not easy to implement" and "someone is going to have to find large bags of money to get the required developer resources to make it all work". Someone (I forget who) even went so far as to say that even if they had the required funding now, implementing it could very well be impossible because of technical factors.
The harsh reality is that Sun's ZFS was limited in scope relative to what it is today. We still have some of those issues, and those issues will exist for the foreseeable future (and possibly for the life of ZFS). Just as FAT32 is limited to 4GB files (and virtually anyone that uses Windows is aware of that limitation, among others), ZFS has its own quirks.
Sun's goal was to make a file system, from the ground up, that was supposed to be incorruptible and scalable to mind-boggling sizes, yet still perform well. With those kinds of things to consider, you'll have to make engineering decisions that will make someone, somewhere, unhappy. For Sun, it was relatively easy to say things like "expect your ZFS admin to be an expert" because Sun wanted your support contract (and the money). It was equally easy for them to say "you won't want to make a small zpool of multiple disks with no redundancy and expect the zpool to continue to function" because that was not their target customer.
ZFS scales up very well (it was engineered to do so for the long-term) but doesn't scale down very well for home users. So either you 'upsell' your server to something that serves ZFS pretty well, or you are simply SOL. :(
I've never had a file server with 32GB of RAM before my current FreeNAS server. I'm planning to go bigger during the summer. The question is whether I'll start off at 128GB or 96GB.