I believe @jgreco would have a better answer here, but as I recall, some years ago we were seeing a number of threads in which users had experienced unexplained data loss or corruption. The common element in all of them was that they had < 8 GB of RAM--that was the recommended amount even then, but it wasn't recommended as strongly as it is now. Since strengthening that recommendation and otherwise updating the hardware recommendations, we rarely see those kinds of issues.
Now, if you're getting data loss or corruption, it's probably a result of a bug--but it seems to be a bug that was only triggered in low-RAM conditions. Maybe that bug has been fixed by now, or maybe it hasn't. Another common characteristic of those data loss scenarios was that the systems in question worked just fine, until they didn't any more.
Basically I had observed, over a long period of time, that there were catastrophic failures being experienced, especially in the 4GB AMD APU community that was popular for the first year or two of FreeNAS 8, and that there were also problems being experienced on other platforms where there was high memory stress. This typically manifested itself as a sudden pool corruption, one fine day, usually when the pool was rather full-ish (probably inducing additional metaslab/caching stress), and at best you could recover a good chunk of the pool if you were patient and worked around kernel panics, and at worst the pool was basically trashed. The problem was worse because most of the people experiencing this were not experienced sysadmins, which also made it hellishly difficult to collect detailed information.
I changed the documentation to introduce the 8GB base requirement plus 1GB per TB of disk, and this has been very successful in avoiding problems. The lone exception is dedupe, which has different (and much larger) memory requirements--and if you don't meet them, pool import can be a real problem.
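For what it's worth, the rule of thumb above is simple enough to sketch in a few lines. This is just an illustration of the sizing guideline as stated (8GB base plus 1GB per TB of raw disk); the function name and structure are mine, not anything from the docs, and it deliberately does not try to model dedupe, which has its own much larger requirements:

```python
def recommended_ram_gb(pool_tb, base_gb=8):
    """Rule of thumb: 8 GB base plus 1 GB per TB of raw disk.

    Note: dedupe is NOT covered here -- it has separate, much larger
    memory requirements, and falling short of those can make pool
    import a real problem.
    """
    if pool_tb < 0:
        raise ValueError("pool size cannot be negative")
    # 1 GB of RAM per TB of disk, on top of the base requirement
    return base_gb + pool_tb

# Example: a 24 TB pool -> 8 + 24 = 32 GB recommended
print(recommended_ram_gb(24))  # -> 32
```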
When Jordan came along to iX, he criticized me for having made this change without having quantified it (even though *I* had no affected systems), and then tried to cajole me into "figuring it out," to which I pointed out that I'm not an iX employee, and then he grumbled quite a bit about how this needed to be resolved -- but of course did nothing at all about it.
While I do work with servers and networking professionally, I would note that my level of willingness to help on the forums doesn't extend to complex systems analysis and debugging. On the other hand, I'm also pretty good at noticing trends and inferring things. At one point I had a list of some dozens of examples of threads where people had lost pools. We've switched forumware a few times since then and I don't know where that might be now, and I don't care enough to dig.
The thing is, ZFS on 4GB did actually work for a lot of people. But it puts the system under a huge strain. It's 2019. Memory is cheap. If your data is important--and we assume you're using ZFS because your data is important--why risk it? My big goal is to make sure people aren't lulled into doing dumb things that cost them their data, so I feel some responsibility to inform people. But I'm also fine with the idea that you're free to do as you wish, and you own the results of your choices.