I plan to use it for backing up the 5 computers in the house and for sharing files (pictures and other media). That said, I expect space requirements to grow over the next few years and don't really have a 'target'. For now, I plan to start with either 8 or 12 4TB drives.
The builds in your signature,
@Chris Moore, only have 32 GB of RAM... from what I've been reading, it's recommended to have at least 1 GB of RAM per TB of physical disk space. Seeing as I'm going for high capacity on this build, I don't think that would be enough - unless the RAM requirements don't scale 1:1 as I've been led to believe. Should one stick to the 1:1 rule for best performance, aim above it, or is 1:1 typically more than is actually necessary at larger capacities?
I have one server with 16 GB and the other with 32 GB, and in both cases it is more than I need.
Let me go over a couple of things to make sure you know what I know. This is based on a lot of reading and my own personal observations from running several servers over the past 6 years. I made some mistakes and learned some lessons the hard way, but I never lost my data.
The demand for memory is for two things:
1st, you need memory to cache your writes. By default, writes go to RAM first and are flushed to disk when either enough data has accumulated in the write cache to hit the size threshold or enough time has passed to hit the time threshold; only then is the write actually committed to disk.
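The flush-on-size-or-age behavior can be pictured with a toy sketch. The thresholds and flush target here are made up for illustration; real ZFS implements this with transaction groups and its own tunables:

```python
import time

class ToyWriteCache:
    """Toy model of write caching: incoming writes accumulate in
    memory and are flushed when either the size threshold or the
    age threshold is reached (values here are invented)."""

    def __init__(self, flush_fn, max_bytes=64 * 2**20, max_age_s=5.0):
        self.flush_fn = flush_fn      # called with buffered data on flush
        self.max_bytes = max_bytes
        self.max_age_s = max_age_s
        self.buf = []                 # pending writes
        self.size = 0                 # bytes currently buffered
        self.oldest = None            # timestamp of oldest buffered write

    def write(self, data):
        if self.oldest is None:
            self.oldest = time.monotonic()
        self.buf.append(data)
        self.size += len(data)
        # Commit when enough data OR enough time has accumulated.
        if (self.size >= self.max_bytes
                or time.monotonic() - self.oldest >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.buf:
            self.flush_fn(b"".join(self.buf))
        self.buf, self.size, self.oldest = [], 0, None
```

Until a threshold is hit, nothing touches the disk at all, which is why writes feel fast and why RAM for the write path matters even on a lightly used box.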
2nd, you need memory to cache your reads. This is usually the vast majority of the memory utilization, but all it does is speed up access to data. The design is intended for a system with multiple simultaneous users. What gets cached depends on the available space in memory, how frequently the data is read, and how recently it was read. If you are not reading the same data with some degree of frequency, your read cache is useless because you always have to go to the disks to get the data anyway. This is the most likely situation in a home use environment, at least in my experience.
That said, you do need a certain amount of memory; however, in the case of a home system with a small number of concurrent users, once you get beyond 16 GB of memory, I would say you really don't need 1 GB of RAM for every 1 TB of storage. In my system with 32 GB of memory, I have all that memory because one of my jails is the PLEX server that everyone in the house uses to watch movies and the other is a headless VirtualBox installation with four Linux virtual machines running. Because of all the virtual machines, I would like to go up to 64 GB of memory, but I can get by with what I have.
The amount of storage you are targeting (in the 8-drive system) is actually not that different from what I have. Here is what I am basing that on:
If you go with the 8 x 4 TB solution and set it up as RAID-Z2, it would give you around 17 TB of usable storage.
If you go with the 12 x 4 TB solution and set it up as RAID-Z2, it would give you around 31 TB of usable storage.
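If you want to run the numbers yourself, here is a rough back-of-the-envelope estimate. It ignores ZFS metadata, padding, and swap reservation, so real figures will come out a little different, and the 80% fill guideline is a rule of thumb, not a hard limit:

```python
def raidz_usable_tib(num_drives, drive_tb, parity=2, fill_ratio=0.8):
    """Rough usable-space estimate for a RAID-Z vdev.

    num_drives: total drives in the vdev
    drive_tb:   marketing size per drive in TB (10**12 bytes)
    parity:     2 for RAID-Z2
    fill_ratio: keep the pool below ~80% full for good performance
    """
    data_drives = num_drives - parity           # parity drives don't hold data
    raw_bytes = data_drives * drive_tb * 10**12 # drive makers use decimal TB
    return raw_bytes / 2**40 * fill_ratio       # convert to binary TiB

print(round(raidz_usable_tib(8, 4), 1))   # ~17.5 TiB
print(round(raidz_usable_tib(12, 4), 1))  # ~29.1 TiB
```

The gap between a drive's decimal "4 TB" label and binary TiB, plus the free-space guideline, is why 24 TB of raw data capacity shrinks to roughly 17 TB usable.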
With the level of redundancy in my system, my storage space is almost 14 TB and I am only using 5.5 TB, so I have room to grow, and I am only using 12 drives. I can easily add more.
Certainly you can build in more storage up front, but if I were in your place, I would get one of those 24-bay storage units, put six drives in it, and set up a single-vdev pool; when you need more space, just add another vdev to the pool. One of the nice things about FreeNAS is that you can easily expand your existing pool. The rule is that all drives in a vdev should be the same size, but you can have a vdev of 6 drives that are 2 TB each, another vdev of drives that are 4 TB each, and another vdev of drives that are 6 TB each, all in the same pool. I plan on staying with 2 TB drives for now because they are super inexpensive, but there are some advantages to 4 TB drives.
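Since a pool's capacity is just the sum of its vdevs, growing one RAID-Z2 vdev at a time adds up like this (the three-vdev pool below is hypothetical, chosen to match the mixed-size example above):

```python
def vdev_usable_tb(num_drives, drive_tb, parity=2):
    """Usable TB of one RAID-Z vdev, before metadata and
    free-space overhead (parity=2 for RAID-Z2)."""
    return (num_drives - parity) * drive_tb

# Hypothetical pool: three RAID-Z2 vdevs, each internally uniform,
# but with different drive sizes from one vdev to the next.
vdevs = [(6, 2), (6, 4), (6, 6)]  # (drives, TB per drive)

total_tb = sum(vdev_usable_tb(n, s) for n, s in vdevs)
print(total_tb)  # 8 + 16 + 24 = 48 TB of raw usable space
```

The nice property of this approach is that each expansion step is independent: you buy whatever drive size is cheap per TB at the time, as long as the drives within that new vdev match each other.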
In the end, if you are not hammering your server with a constant workload, like a corporate data center, the RAM cache will probably not give you much performance benefit, and the amount of RAM is more accurately based on the amount of usable storage, not the raw disk space. Workload matters: for example, in most home scenarios it would be a total waste to put an SSD in the system to act as an additional cache, as a separate log device, or for any other purpose. You can throw more money at a system if you want to, but it really depends on how you will actually use it. I am super paranoid about losing my data, so I have everything on one NAS replicated on the other NAS. I could have a complete, catastrophic failure of one of these systems and not lose a single file.
I know that I am a bit long-winded, but I hope that I provided some insight.