tl;dr You asked the RAM question. ;) Eric's answer, "you should be fine," is on target.
A little more of the logic: with ZFS, RAM is used primarily as the ARC, which is your primary cache. The idea is that frequently accessed reads can be served directly from memory without ever bothering the disks. The intent behind "1GB per TB" is that RAM scales up as the pool grows, so the cache keeps holding a similar percentage of the data being read; that way we keep serving (x)% of reads from cache for a given workload. Remember, the rule assumes that as the pool scales, the users and the number of accesses are also scaling, and that we NEED to hit the cache as often as possible, or at the very least in the same ratios. With lots of users in a corporate environment that's valid. Home users completely break that model.
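If it helps, here's a toy sketch of the arithmetic behind that rule of thumb. It assumes the commonly cited baseline of 8GB minimum plus 1GB per TB of pool; the numbers are illustrative, not a guarantee for any particular workload:

```python
# Toy illustration of the "1GB per TB" rule of thumb.
# Assumes the commonly cited baseline of 8 GB plus 1 GB per TB;
# real needs depend entirely on your workload.

def suggested_ram_gb(pool_tb: float, base_gb: int = 8) -> float:
    """RAM suggestion that scales linearly with pool size."""
    return base_gb + 1.0 * pool_tb

for pool_tb in (1, 8, 24, 48):
    print(f"{pool_tb:>3} TB pool -> ~{suggested_ram_gb(pool_tb):.0f} GB RAM")
```

The point of the linear scaling is just to keep the cached fraction of the pool roughly constant as the pool (and, in the corporate model, the user count) grows.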
How is a home user different? Pretty much the only thing that takes this much space is media. Typically we write once, read once... then let it sit for months at a time before accessing files at random. A dumb cache has no hope of finding a useful pattern in that. In addition, the files are huge and sequential, and the pool is MUCH faster than the network. So we aren't really using ZFS for its caching properties; primarily it's being used for reliability and redundancy. Thus we can relax the rule of thumb. The reason we can't give you a hard guarantee is that your specific workload and usage are unique (but common). It's easy to create a workload that will beg for all the RAM you can afford... even on a small 1TB pool.
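If you're curious how much your cache is actually helping on your workload, you can read the ARC hit counters yourself. A minimal sketch, assuming ZFS on Linux, where the counters are exposed in /proc/spl/kstat/zfs/arcstats; on FreeBSD the same counters live under the kstat.zfs.misc.arcstats sysctl tree instead:

```python
# Compute the ARC hit ratio since boot from the ZFS on Linux kstats.
# Assumes /proc/spl/kstat/zfs/arcstats exists (ZoL/OpenZFS on Linux).

def arc_hit_ratio(path: str = "/proc/spl/kstat/zfs/arcstats") -> float:
    stats = {}
    with open(path) as f:
        # First two lines are kstat header and column names; skip them.
        for line in f.readlines()[2:]:
            name, _kind, value = line.split()
            stats[name] = int(value)
    hits, misses = stats["hits"], stats["misses"]
    total = hits + misses
    return hits / total if total else 0.0

if __name__ == "__main__":
    print(f"ARC hit ratio since boot: {arc_hit_ratio():.1%}")
```

On a write-once, read-rarely media pool, don't be surprised if this number is unimpressive; that's exactly the "no useful pattern for the cache" situation described above.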
Ironically, the 32GB RAM limit is probably what pisses me off the most about Haswell E3s. ;)