[BUILD] First proposed build, suggestions welcome


txr13

Cadet
Joined
Mar 30, 2016
Messages
5
I'm looking for feedback on a proposed build for running FreeNAS for a fairly specific use case.

Case: Norco RPC-2212
Motherboard: ASRock C2550D4I
RAM: 16GB ECC (Crucial CT2KIT102472BD160B)
Additional HBA: LSI 9300-4i
Boot volume: 2x 16GB SATADOM (Supermicro SSD-DM016 PHI) mirrored
Data volume: 3x WD Red Pro 2TB (WD2001FFSX) in 3-way mirror
Power: 500W redundant (iStarUSA IS-500S2UP)

This server is intended to be used for high-reliability offsite data archival. There will be no media tasks. There will be no local shares of any sort. Data transfer to/from the unit will be infrequent, and it's likely that transfers will be handled by SFTP (admin-side) and chrooted FTP/S (client-side). We do not expect to run many (if any) jails.

I expect to add more drives (in groups of three) and more RAM (in line with drive capacity) over time, whenever used capacity crosses a 70% threshold. (I want to make sure I have enough lead time to add more capacity before I cross an 80% threshold.) Since this is an offsite archival unit, I'm much more concerned with redundancy than with space efficiency, hence the 3-way mirror setup.
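(For what it's worth, the capacity check I have in mind is nothing fancy; something like the rough sketch below, run from cron or similar. The pool name "tank" is made up for the example, and I believe FreeNAS's own alerting complains around 80% anyway, so this is only about getting earlier warning at 70%.)

Code:
#!/usr/bin/env python
# Rough capacity check: warn when it's time to order the next set of
# three drives. Assumes the stock zpool(8) tool and a pool named "tank".
import subprocess

POOL = "tank"      # hypothetical pool name, adjust to taste
WARN_AT = 70       # start planning the next 3-way mirror here
HARD_LIMIT = 80    # never want to cross this

out = subprocess.check_output(
    ["zpool", "list", "-H", "-o", "name,capacity"]).decode()

for line in out.splitlines():
    name, cap = line.split("\t")
    if name != POOL:
        continue
    used = int(cap.rstrip("%"))
    if used >= HARD_LIMIT:
        print("%s is at %d%%: past %d%%, expand NOW" % (name, used, HARD_LIMIT))
    elif used >= WARN_AT:
        print("%s is at %d%%: time to order the next set of drives" % (name, used))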

I am, I admit, a complete newb at everything FreeNAS. That said, I have done a fair amount of research and perused the hardware recommendations and forum posts at length. To answer a couple of questions up front:

Why am I not using a Supermicro board? Well, I've seen lots of Supermicro boards cross my bench. Usually dead, and usually with a failed RAID that can't be accessed anymore because something on the board melted or blew up. Supermicro has earned a reputation in my shop as a very cheap-quality solution, and it's not something I would rely on. I'm very aware that Supermicro is highly recommended by almost everybody on the forums, and I'm not sure how to reconcile that with my direct experience of the many failed Supermicro machines I've had to deal with. For now, I'm choosing to steer clear of their boards, but I'm open to having my mind changed by a persuasive argument.

Why a 500W PSU? Largely because it offers the right number of molex connectors, while allowing me to keep a spare or two just in case I need another for fans or something that I didn't correctly factor in. (I don't like using Y-adapters if I can avoid it.) I definitely won't be using that full capacity right off the bat, but if I wind up trying to run 12x 6TB (or larger!) drives in future, I might be glad of the extra headroom. But... is even that enough? Should I maybe be looking at bumping it to 600 or 700 watts?

What's up with the LSI 9300-4i? The C2550 has twelve SATA ports, but I'm going to be using two of those for the boot drives, and I'd like to have my data drives be all 6Gb/s capable. That does lead me to a different question... I've read a lot about the Marvell controller chips, and while I haven't seen any recent reports of failures, I'm still wobbling back and forth on the fence about using those ports, versus upgrading the 9300-4i to the 9300-8i. I'd still end up using two Marvell 6Gb/s ports for data drives, but is that going to be a better deal in the long run, versus using six Marvell ports (half my drives!)? Honestly, I've been feeling pretty good about moving forward with the 4i, especially with the availability of a firmware update to "improve Marvell 9230 HDD stability." But then... I'm not the expert here.

Thoughts, suggestions, and general feedback welcome. Thanks all!
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
My feeling is that people always overbuy, and don't get anything like their maximum bang-per-dollar, being lulled into spending money on things that don't really benefit them or functionally save them time. In that spirit:

  1. I think a pair of SATA DOMs is dumb. Dumb dumb dumb. Your box is going to be "rarely used", and offsite. You blow away two SATA ports on what is a "data archival" machine (what the hell do you care about saving a few moments during a rare reboot or a system update or whatever). I'd get a mirrored pair of decent nano-form-factor USB thumb drives and call it a day, and give myself the two SATA ports back. But I argue with everyone about this all the time, and there are a thousand people who have absolutely uncompromising wood for "SATA DOMs" and "hotswap" and don't care what I say on the subject. So, you have my opinion. But I'm pretty sure that if you listen to your own arguments (whatever they are) for buying SATA DOMs to boot this box, you'll find they sound ridiculous in the face of freeing up two SATA ports, which is materially relevant in your configuration.
  2. Having an HBA in the system is a mega-complexifier. If you're buying drives in groups of three (and if you take my above advice), then you have room enough for your current set of three PLUS your next one without even messing around with the Marvell ports. There's no reason to get the extra HBA now. Your "6Gb/s-capable" statement is ridiculous. There is no scenario in which you will notice any difference whatsoever, based on this build. I would rethink everything you're saying. There is no configuration on the planet with this board where having "6Gb/s-capable" for all of your drives even means anything in terms of the performance you will functionally experience, as far as I know. The Marvell ports are probably fine---as we move forward now with Marvell cleaning their act up, with the drivers on BSD getting updates, and with the move to a FreeBSD 10 base, I'd like to think the Marvell problems are probably behind us.
  3. That means, if you take my above advice, that you have 12 ports without buying other complexifying equipment like HBAs. That's four "sets" of your drives, whatever that means, and since you're only intending to start with one set, I think you're fine. And in any case, if you decide you "just have to have" your HBA, you can ALWAYS buy it later.
  4. Your power supply statement is ridiculous. Buying a bigger power supply because you don't want to use Y-splitters is close to the most ridiculous thing I've ever heard. You don't size your PSU based on connectors---you size it based on the appropriate rail loading, sir. There is no reasonable configuration you are talking about that would even ***REMOTELY*** bump up against the efficient capacity of a 500W PSU, even when spinning up drives.
  5. Your statement on SuperMicro hardware does not reflect anything even like a reasonable reality. But it doesn't really matter: the board you've chosen is fine, and if you choose to be anti-SuperMicro, then whatever, as long as you've chosen a recommended alternative, which you have.
So if this were me, I'd ditch the SATA DOMs for 100% certain, and I'd ditch the HBA certainly at least for the time being, and use my 6 ports. Then when the next set of drives was coming, I'd revisit the question of whether or not I wanted to go with an HBA, or try the Marvell ports.

Doing this will save considerable money, and in my view, will make your system far less complex. That's what I would do.
 

txr13

Cadet
Joined
Mar 30, 2016
Messages
5
DrKK said:
I think a pair of SATA DOMs is dumb. Dumb dumb dumb. Your box is going to be "rarely used", and offsite. You blow away two SATA ports on what is a "data archival" machine (what the hell do you care about saving a few moments during a rare reboot or a system update or whatever).

Less about saving moments, more about reliability. I had this idea that a DOM would be more reliable than your typical USB drive. But, perhaps that is also dumb dumb dumb. (DOM DOM DOM?)

DrKK said:
I'd get a mirrored pair of decent nano-form-factor USB thumb drives and call it a day, and give myself the two SATA ports back.

The other part of this is that... yes, for reliability's sake, I'd like the boot volume mirrored. But this board only offers three USB ports in total (either two onboard plus one via headers, or vice versa). If I use two USB ports for the boot volume, that only leaves me one USB port for both keyboard and mouse if I have to go onsite and hook up a crash cart to the box. I suppose that's easily solved by carrying a USB hub with me all the time, or else stashing one in the cage along with the box.

DrKK said:
There's no reason to get the extra HBA now.

I was attempting to get most of the hardware purchased in the initial build, but I freely admit there's no real need to do so. So that's fair.

Your "6GB/s-capable" statement is ridiculous. There is no scenario in which you will notice any difference whatsoever, based on this build. I would rethink everything you're saying. There is no configuration on the planet with this board where having "6Gbps/s-capable" for all of your drives even means anything in terms of what performance you will functionally experience, as far as I know. The Marvell ports are probably fine---as we move forward now with Marvell cleaning their act up, and with the drivers on BSD getting updates, and moving now to version 10 base, I'd like to think the Marvell problems are probably behind us.

Good to know, and thank you!

DrKK said:
Your power supply statement is ridiculous. Buying a bigger power supply because you don't want to use Y-splitters is close to the most ridiculous thing I've ever heard. You don't size your PSU based on connectors---you size it based on the appropriate rail loading, sir. There is no reasonable configuration you are talking about that would even ***REMOTELY*** bump up against the efficient capacity of a 500W PSU, even when spinning up drives.

Granted, of course one bases PSU size on rail loading. I originally started the build with a 460W PSU and only bumped it to 500W for the extra headroom and, yes, the additional connectors. But then I did my reading and looked through the sticky about power supplies: https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/

According to the rough numbers I got, I was looking at something like a 750W PSU, which seemed outlandish to me. Looking over the TL;DR section for my board at the bottom, a 12-drive configuration looked like it would put a 500W PSU awfully close to the margin of error, at least at peak load. Hence my query about moving to a higher-rated PSU.
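To show where that worry comes from, here's the kind of back-of-the-envelope math I was doing. Every number below is my own assumption (per-drive spin-up draw, board power, how hard I'm willing to load the PSU), not the sticky's exact method, so take it as a rough sketch only.

Code:
# Back-of-the-envelope peak-load estimate for a fully populated chassis.
# All figures are assumptions; substitute real datasheet numbers.

DRIVES = 12
SPINUP_W_PER_DRIVE = 30    # rough 12V + 5V peak during simultaneous spin-up
BOARD_CPU_RAM_W = 50       # Avoton board, ECC DIMMs, fans
HBA_W = 15                 # LSI HBA, if one ends up in the box
HEADROOM = 0.70            # only load the PSU to ~70% of its rating

peak_w = DRIVES * SPINUP_W_PER_DRIVE + BOARD_CPU_RAM_W + HBA_W
psu_rating_w = peak_w / HEADROOM

print("Estimated peak draw: %d W" % peak_w)              # ~425 W
print("PSU rating to keep peak at ~%d%% load: %d W"
      % (HEADROOM * 100, psu_rating_w))                  # ~600 W

With those guesses it lands around a 600W rating, which is exactly why I was asking about 600 or 700 watts instead of 500.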

DrKK said:
Your statement on SuperMicro hardware does not reflect anything even like a reasonable reality.

All I did was state my experience over the last several years, and say that I was open to having my mind changed by a persuasive argument. What I'm hearing from you seems to boil down to, "You're wrong and delusional." So, okay... I guess that's one way to be persuasive. (I was hoping for something more along the lines of whether Supermicro had ever had a bad batch of boards, or whether there are known configuration gotchas that beginners can stumble into, etc. I never built the stuff that blew up; I only had to clean up after it did. I'm willing to accept reasonable explanations, especially since everybody else seems to love them so much.)

So... definitely ditch the HBA for now (revisit the question later) and don't worry so much about the Marvell ports... and I'll definitely consider switching from DOMs to USB, even if that means I have to store a hub in the cage with the box.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
DOMs are more reliable than USB. iXsystems switched from USB to DOMs years ago because of this. ;)
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
I will never ever use USB again. I think I have killed about 5 with FreeNAS so far.

I'd use an SLC one, but they are expensive and hard to find.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I don't know what kind of USB sticks you guys are using. In the past 3 years I've built about 10 FreeNAS boxes, and I have never had a USB failure on any of them, except within the first 24 hours. :)

In any case, @txr13, re-reading my post, my tone is very pompous and annoyed. I apologize for that. I usually make it a point not to post in the wee hours because of this tendency to be unnecessarily disparaging when I am tired. My intent was pure; sorry about the tone.
 

txr13

Cadet
Joined
Mar 30, 2016
Messages
5
DrKK said:
My intent was pure; sorry about the tone.

I did say that I read a bunch of forum posts before I chose to create an account here and post. I was already steeling myself in expectation of having my selections and reasoning torn into and dissected. :) So your tone matched what I was expecting (and what I tend to get from BSD users in general, interestingly enough). In any case, all's well.

One thing I did remember yesterday: I mentioned two reasons for planning to buy the HBA up front, but I completely forgot about another reason for doing so. (And of course, I forgot the reason that made the most rational sense, so I sounded kinda dippy myself.) The Norco RPC-2212 case has three backplanes, with one SFF-8087 connector each. (I'll be using reverse breakout cables to connect the motherboard to the backplanes.) My intent had been to connect one drive of each 3-way mirror to each backplane, to guard against an entire backplane failing and taking out most or all of a mirror. With the HBA, each mirror would also span three different controllers, to guard against any single controller failing, knocking drives out, or any other such hokum. My biggest concern was whether the known risk of using a Marvell controller is great enough to outweigh the cost, and the reduced controller redundancy, of hanging two backplanes off the LSI HBA.
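To make the layout concrete, here's a quick sketch of the mapping I'm describing. The device names are invented, and in practice the pool would be built through the FreeNAS GUI rather than by printing zpool commands; this is only meant to illustrate how each mirror would span all three backplanes.

Code:
# One list per backplane (BP1..BP3), four bays each. Device names are
# made up for illustration; the real ones depend on enumeration order.
bp1 = ["da0", "da1", "da2", "da3"]
bp2 = ["da4", "da5", "da6", "da7"]
bp3 = ["da8", "da9", "da10", "da11"]

# zip() walks the three backplanes in lockstep, so every 3-way mirror
# gets exactly one member from each backplane, and losing an entire
# backplane costs each mirror only one disk.
for i, members in enumerate(zip(bp1, bp2, bp3)):
    action = "create tank" if i == 0 else "add tank"
    print("zpool %s mirror %s" % (action, " ".join(members)))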
 