I'm looking for feedback on a proposed build for running FreeNAS for a fairly specific use case.
Case: Norco RPC-2212
Motherboard: ASRock C2550D4I
RAM: 16GB ECC (Crucial CT2KIT102472BD160B)
Additional HBA: LSI 9300-4i
Boot volume: 2x 16GB SATADOM (Supermicro SSD-DM016 PHI) mirrored
Data volume: 3x WD Red Pro 2TB (WD2001FFSX) in 3-way mirror
Power: 500W redundant (iStarUSA IS-500S2UP)
This server is intended to be used for high-reliability offsite data archival. There will be no media tasks. There will be no local shares of any sort. Data transfer to/from the unit will be infrequent, and it's likely that transfers will be handled by SFTP (admin-side) and chrooted FTP/S (client-side). We do not expect to run many (if any) jails.
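For the chrooted access, the kind of per-client lockdown I have in mind looks roughly like this (a sketch only, using sshd's internal-sftp as the example; the `sftpusers` group name and the dataset path are placeholders, and an FTP/S daemon's chroot config would be analogous):

```
# /etc/ssh/sshd_config excerpt -- sketch, not a tested config
Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory /mnt/tank/clients/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

The chroot directory (and everything above it) has to be root-owned and not group/world-writable, or sshd will refuse the login.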
I expect to add more drives (in groups of three) and more RAM (in line with drive capacity) as time goes on, as used capacity crosses a 70% threshold. (I want to make sure I have enough lead time to add capacity before I cross an 80% threshold.) Since this is an offsite archival unit, I'm much more concerned with redundancy than I am with space efficiency, hence the 3-way mirror setup.
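For what it's worth, the back-of-the-envelope math I'm using for the expansion trigger (assuming ~1.82 TiB usable per 2TB drive and ignoring ZFS metadata overhead):

```python
# Capacity math for the 3-way mirror plan.
# Assumption: each 2 TB drive yields ~1.82 TiB; ZFS overhead ignored.

def pool_capacity_tib(vdevs, drive_tib=1.82):
    """Each 3-way mirror vdev contributes one drive's worth of space."""
    return vdevs * drive_tib

def expansion_needed(used_tib, vdevs, threshold=0.70):
    """True once used space crosses the 70% lead-time threshold."""
    return used_tib / pool_capacity_tib(vdevs) >= threshold

# One vdev (3x 2TB): ~1.82 TiB usable; start planning at ~1.27 TiB used.
print(round(pool_capacity_tib(1) * 0.70, 2))
```

So with the initial three drives I'd be ordering the next group at around 1.27 TiB used, well before the 80% mark.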
I am, I admit, a complete newb at everything FreeNAS. That said, I have done a fair amount of research and perused the hardware recommendations and forum posts at length. To answer a couple of questions up front:
Why am I not using a Supermicro board? Well, I've seen lots of Supermicro boards cross my bench. Usually dead, and usually with a failed RAID that can't be accessed anymore because something on the board melted or blew up. Supermicro has gotten a reputation in my shop as being a very cheap-quality solution, and it's not something I would rely on. I'm very aware that Supermicro is highly recommended by almost everybody on the forums, and I'm not really sure how to square that with the direct knowledge of the many failed Supermicro machines I've had to deal with. For now, I'm choosing to steer clear of their boards, but I'm open to having my mind changed by a persuasive argument.
Why a 500W PSU? Largely because it offers the right number of molex connectors, while allowing me to keep a spare or two just in case I need another for fans or something that I didn't correctly factor in. (I don't like using Y-adapters if I can avoid it.) I definitely won't be using that full capacity right off the bat, but if I wind up trying to run 12x 6TB (or larger!) drives in future, I might be glad of the extra headroom. But... is even that enough? Should I maybe be looking at bumping it to 600 or 700 watts?
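To put some numbers behind the wattage question, here's the rough arithmetic I've been doing. The per-drive figures are assumptions pulled from typical 3.5" NAS-drive spec sheets, not measurements, and staggered spin-up would cut the peak considerably:

```python
# Rough PSU headroom check for a hypothetical future 12-drive build.
# Assumed (not measured): ~2 A @ 12 V spin-up per drive, ~6 W running
# average per drive, ~35 W for board + RAM + HBA.

DRIVES = 12
SPINUP_W_PER_DRIVE = 2.0 * 12   # worst case: all drives spin up at once
RUNNING_W_PER_DRIVE = 6.0
BOARD_W = 35.0

peak_w = DRIVES * SPINUP_W_PER_DRIVE + BOARD_W      # simultaneous spin-up
steady_w = DRIVES * RUNNING_W_PER_DRIVE + BOARD_W   # normal operation
print(f"peak ~{peak_w:.0f} W, steady ~{steady_w:.0f} W")
```

Even with all twelve drives spinning up at once, that lands around 320 W peak, which suggests 500 W already has real headroom, but I'd welcome a sanity check on those assumed numbers.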
What's up with the LSI 9300-4i? The C2550D4I has twelve SATA ports, but two of those will go to the boot drives, and I'd like all of my data drives on 6Gb/s-capable ports. That does lead me to a different question: I've read a lot about the Marvell controller chips, and while I haven't seen any recent reports of failures, I'm still on the fence about using those ports versus upgrading the 9300-4i to the 9300-8i. Even with the 8i I'd still end up using two Marvell 6Gb/s ports for data drives, but is that a better deal in the long run than using six Marvell ports (half my drives!) with the 4i? Honestly, I've been feeling pretty good about moving forward with the 4i, especially with the availability of a firmware update to "improve Marvell 9230 HDD stability." But then... I'm not the expert here.
Thoughts, suggestions, and general feedback welcome. Thanks all!