I have a few other things going on that influence this project. I thought it might be worthwhile to lay out the bigger picture, because I really appreciate the help and feedback, and the suggestions have been great. Giving the whole picture (or more of it) seems better, so that if I shoot down a suggestion there's a reason behind it, etc.
We just purchased 10GbE SFP infrastructure, and it's turning out to be a blessing and a curse. We were trying to avoid it, but we may need some 10GbE RJ45 hardware after all. We currently use a lot of bonded/teamed/LAG'd gigabit connections, and it's gotten to be too much: too much to manage, too much hardware, too much cabling. The switches needed to be larger, and since none of this is in a racked/datacenter location, the power consumption and, more importantly, the noise level became unacceptable. Additionally, there are times when we push data across subnets or into and out of sandbox environments, and therefore through the firewall, which has gigabit interfaces and multiple VLANs.
We easily saturate 3.5-4Gb/s on a regular basis, and our peak is about 6-7Gb/s. I'd expect that to go up slightly with the new NASes, but not by much, due to hardware limitations on the machines pulling or processing the data.
I'm going to be building 5-6 of these things and keeping a cold spare on hand, so 6-7 total. Power consumption and footprint are the primary concerns: the smaller and the lower the power consumption, the better. I also need to upgrade a few pfSense firewalls, so if possible I'd like to use the same hardware across the board and keep minimal spares on hand. I don't mind paying a few dollars more here or there so that it's all the same hardware.
The strongest contender right now is the U-NAS NSC-800 chassis with the Supermicro A1SRi-2758F motherboard. The quad NICs plus dedicated IPMI are very appealing, and it supports 64GB of ECC DDR3. If this ends up being the one, I'll team the quad NICs and run a dedicated 12Gb/s SAS controller from the PCIe slot. It'll cost a bit more, as we'll have to buy a few 10GbE RJ45 switches. Additionally (and this is a very bad idea), we could always use USB 3.0-to-Ethernet adapters to bond up to something like 6-8Gb/s, assuming FreeNAS has drivers for them... a bad idea, but perhaps the only choice. Alternatively, we could dump the NAS Ethernet into some old gigabit RJ45 switches we have with 10GbE SFP uplinks. Not pretty, requires more hardware, still lots of cables, etc., but it might work.
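For what it's worth, the quad-NIC teaming I'm describing is just FreeBSD's lagg(4) under the hood (FreeNAS configures it through the GUI rather than rc.conf). On a stock FreeBSD box an LACP aggregation would look roughly like the sketch below; the igb0-igb3 interface names and the IP address are placeholders for whatever the board's ports actually enumerate as:

```shell
# /etc/rc.conf fragment -- LACP aggregation of four onboard gigabit ports
# (interface names igb0-igb3 and the address are assumptions, not verified
#  against this specific board)
cloned_interfaces="lagg0"
ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
ifconfig_igb3="up"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3 192.168.1.10/24"
```

One caveat that matters for the numbers above: LACP hashes traffic per-flow, so any single client connection still tops out at ~1Gb/s. The ~4Gb/s aggregate only shows up with multiple concurrent streams, and the switch ports on the other end need a matching LACP/LAG configuration.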
I took a look at the C2750D4I and the E3C224D4I-14S. I'm not wild about either.
The E3C224D4I-14S only supports 32GB of RAM, has only two onboard NICs, and its SAS controller is only 6Gb/s. I don't know that I could justify paying up for SAS drives only to be limited to 6Gb/s when we're already at SATA 6Gb/s speeds. I could put a dual 10GbE NIC in there, which solves the connectivity problem, but having never tested the system, I'd guess 10GbE would only eliminate the need for the gigabit bonding shenanigans I mentioned for the Supermicro board. I doubt we'd need 10GbE, as the hardware probably couldn't make use of it.
I had initially considered the C2750D4I and had planned to ask the forum how FreeNAS handles that chipset's multiple individual SATA controllers without a dedicated HW RAID controller. Running drives through three different SATA controllers seems less than ideal, to say the least. Additionally, it's all 6Gb/s SATA.
To come full circle, it seems the Supermicro board is the way to go in the mini-ITX form factor. That still leaves me with questions about finding a chassis with the size/footprint of the U-NAS NSC-800, a Drobo, or a QNAP that fits more than eight drives. The A1SRi-2758F has both SATA2 and SATA3 ports. With the NSC-800, I'll probably try to fit 2x 2.5" drives attached somehow near where the single 2.5" boot drive would normally go.
If I could find a 10-bay or 12-bay equivalent of the NSC-800, I'd start weighing filling the extra bays with 3.5" drives to enlarge the primary array against keeping the 8x 3.5" primary array and adding a second array of inexpensive 2.5" drives. I'd probably fill a 10-bay chassis entirely with 3.5" drives, while a 12-bay chassis would still get 10x 3.5" drives plus a second array of 4x 2.5" drives.
Not sure if what I'm looking for exists, but hopefully this makes a little more sense. The chassis needs to stay as small as possible; a large tower isn't an option.
Thanks for the continued help and suggestions.