First FreeNAS Build


brandonb1987

Cadet
Joined: Apr 19, 2013 · Messages: 1
I want to build a FreeNAS storage system for my home. I want a future-proof system so I won't have to replace all the hardware in a couple of years; I just want to add storage every few years when I need more. I'm looking at the Norco RPC-4224 for the case, since it would give me many years of open drive bays (HDDs) to grow into. The only problem is the backplanes, which I've read here and there don't hold up well, as Norco's QC is sub-par at best. So a 20-24 drive enclosure is my aim, preferably without backplanes if possible, since that's one less cheap part to potentially die and be irreplaceable.

My plan for optimal speed and reliability is to use three IBM M1015 cards (flashed to IT mode) to control the drives, starting with one and adding more as needed. I'll start with eight 3TB WD Red drives in one RAIDZ2 vdev, and each time I expand, add another card with another eight WD Reds in another RAIDZ2 vdev.
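Roughly what I have in mind at the ZFS level (a sketch only; the pool name and device names are placeholders, and on FreeNAS you would normally do this through the GUI rather than the shell):

```
# Start: one 8-disk RAIDZ2 vdev (disks show up as da0, da1, ... on FreeBSD)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Expansion later: add a second 8-disk RAIDZ2 vdev to the same pool.
# Note that a vdev cannot be removed from the pool once added.
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Confirm the pool layout
zpool status tank
```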

As far as the motherboard, CPU, and RAM go, I'm a bit stumped, as I need a board with three x8 slots that are electrically x8, not just physically. I have two boards in mind along with their CPU and RAM, but the question is whether a server-class board with ECC RAM is really worth it. ECC is a feature I would like, but if it really isn't worth it for this design, I can do without.

What are your thoughts/inputs/tips? I'm really just interested in building something future-proof; when I build a system, I build it to last as long as I can. (My gaming rig is still running to this day and still runs most games great for a 10-year-old system.) Money isn't an issue, as I want to build to last.
 

jgreco

Resident Grinch
Joined: May 29, 2011 · Messages: 18,680
You won't find a 20-24 drive case without backplanes. Manufacturing these sorts of things requires that everything line up just right, or else connectors snap off as you insert the drive. Putting the connectors on a circuit board means the connector spacing can be engineered to a fine tolerance, and then all the assembly guys need to worry about is the alignment of the backplane board (or boards). Otherwise you need 24 little paddleboards and have to worry about the precise placement and adjustment of each one. Nightmare.

Seriously, once you're done screwing around with the frustrations lots of Norco users seem to have, you could have bought a Supermicro 846TQ, which gives you quality drive trays and a power supply solution too. The backplane in the TQ breaks out to 24 individual SATA connectors.
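Once it's cabled up, a quick sanity check from the console (FreeNAS is FreeBSD underneath) tells you whether the system actually sees every bay you've populated; this is generic FreeBSD, nothing Supermicro-specific:

```
# List every device CAM knows about; each disk shows up as daN
camcontrol devlist

# Rough count of attached devices (the list can include non-disk
# devices such as enclosure processors, so eyeball it too)
camcontrol devlist | wc -l
```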

If you do not actually plan to saturate your I/O subsystem, I will also note that the 846BE16-R920B is an option: it connects the 24 drives to the system via a SAS expander, meaning the interface to the system is a single SFF8087 (4 lanes at 6Gbps is ~24Gbps, or enough for 24 drives at 100MBytes/sec each). That would mean only a single M1015 in IT mode is needed to support all 24 drives. However, it does permanently cap your future throughput. This is probably an acceptable tradeoff for a NAS, unless you're running multiple 10GbE links or something like that.
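Back-of-the-envelope on that single uplink, assuming ~100MBytes/sec per spinning disk and the usual 8b/10b encoding overhead on 6Gbps SAS:

```
# 1 x SFF8087 uplink = 4 lanes x 6 Gbps = 24 Gbps raw
# 8b/10b encoding leaves 80% usable: 24 Gbps x 0.8 = 19.2 Gbps = 2.4 GBytes/sec
# 24 drives x 100 MBytes/sec = 2.4 GBytes/sec, right at the link's ceiling
```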

Both of those solutions are significantly more expensive than the Norco, but consider that not having to figure out your own power supply, getting a redundant power supply as part of the deal, a case built out of decent-quality steel, trays that don't feel flimsy when inserting and removing them, and gear commonly used in enterprise and datacenter deployments all have some value. However, the Supermicro cases are most compatible with Supermicro boards.
 