BUILD Hardware Opinions Please

Status
Not open for further replies.

xiSlickix

Dabbler
Joined
Feb 5, 2014
Messages
47
This will be my first go-round with FreeNAS. Yes, I have read through the Noobs PowerPoint (which is awesome; I wish all technical communities could do something like that). This build will be for a graduate research lab whose data consists of a lot of images and video (with minimal or no compression; they think AVI is a good container, but I digress).

I'm one of the IT guys here, so they came to me asking about a better way to store data long term. Their data is tied to NIH grants, so they need to keep their research data for 7 years. They have had one or two USB hard drives drop, hit the floor, and lose data, so they are finally getting serious about data storage. I think their budget is around $3,000.

They were looking for a build with 12-16 TB of storage out of the box, with the potential to expand later (thinking RAIDZ2 for their first vdev, and adding another vdev to the zpool down the road). Because our campus has virtually no firewall, and cryptoware / crypto-viruses scare the hell out of me, they will connect via SFTP. I've come up with the following hardware list and was looking for suggestions / critiques. Any advice is welcome.
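For context, my understanding of what that expansion path looks like at the ZFS level (the pool name and disk names below are just placeholders, not the real devices):

Code:
  # initial pool: one 6-disk RAIDZ2 vdev (placeholder names)
  zpool create tank raidz2 da1 da2 da3 da4 da5 da6
  # later: grow the pool by adding a second 6-disk RAIDZ2 vdev
  zpool add tank raidz2 da7 da8 da9 da10 da11 da12

FreeNAS normally drives this through the GUI volume manager rather than the shell, but as I understand it those are the underlying operations.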

The current total for all of this is just under $2500, not counting shipping costs.
[Attached image: FreeNAS-Potential-Parts-List.png]

When and if they need to expand capacity, it sounds like the IBM ServeRAID M1015 is the way to go, but for the time being, I think it is out of scope for their intentions.
I see that the SuperMicro boards appear to be the front-runner, and I am not opposed to using one, but I am more familiar with ASUS boards. If there are any big advantages I'm losing out on (I doubt I'll be able to utilize IPMI in this environment), please point them out. Thanks in advance to anyone who reads this far!
 

xiSlickix

Dabbler
Joined
Feb 5, 2014
Messages
47
Well, I have read a fair ways into this one: http://forums.freenas.org/threads/so-you-want-some-hardware-suggestions.12276/. Forum search tools aren't always the most robust, so I also tried searching "FreeNAS P8B-X" on Google, and not once does that ASUS motherboard show up on this forum. With that in mind, I'll be switching to a SuperMicro MBD-X9SCL-F-O, as I really hate being the only guy running a given motherboard.

I've read a few posts on this forum suggesting that my boot flash drives be in a RAID 1 config. Other posts just said to make sure I dd my first boot flash drive to a second one so I have a spare on hand. This motherboard only has one onboard USB port, so should I look into internal USB 2.0 header adapters, or only worry about having a single flash drive plugged in at a time?
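For reference, the dd approach I've seen described looks roughly like this (device names are examples only and would need to be confirmed against the actual sticks):

Code:
  # clone the current boot stick (da0 here) onto a spare (da1) -- verify device names before running
  dd if=/dev/da0 of=/dev/da1 bs=1m

Obviously dd will happily overwrite whatever it's pointed at, so double-checking the target device is the important part.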
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Given the intended use of this server in a research lab environment, I'd strongly suggest you lose the HAF912 and find a case (or server) with support for redundant PSUs. Your stakes seem to be a little higher than "I can't watch my DVD rips right now, guess I'll use Netflix."

Re: the boot image, I saw some discussion about dd'ing the drives and I thought the consensus was to simply back up the config and have it ready for restore to the second drive, rather than doing a dd clone and risking peculiarities.
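If it helps, grabbing the config is quick either way: it can be saved from the GUI's save-config option, or pulled over SSH. The path below is the usual spot on 9.x-era builds, so verify it on whatever version you land on (the hostname is a placeholder):

Code:
  # pull the system config database off the box for safekeeping (path/hostname are examples)
  scp root@freenas.local:/data/freenas-v1.db ./freenas-config-backup.db

Restoring that onto a freshly imaged stick gets you back where you were without the dd quirks.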
 

xiSlickix

Dabbler
Joined
Feb 5, 2014
Messages
47
Given the intended use of this server in a research lab environment, I'd strongly suggest you lose the HAF912 and find a case (or server) with support for redundant PSUs. Your stakes seem to be a little higher than "I can't watch my DVD rips right now, guess I'll use Netflix."

Re: the boot image, I saw some discussion about dd'ing the drives and I thought the consensus was to simply back up the config and have it ready for restore to the second drive, rather than doing a dd clone and risking peculiarities.


If I can, I'll try to get them to look into buying this instead:
That *should* still fit in the HAF-912 case. That case has great airflow, and I think the professor running the lab had some interest in the box sitting in his office, so I'm trying to avoid a rack-mounted case for now. Though once this thing gets blowing, that may be a different story.
On the memory, according to Kingston (for what that's worth), the RAM I spec'ed out earlier should be fine, or either of these in the appropriate quantities to get to 32 GB:
  • KVR16E11K4/32
  • KVR16E11/8EF
Also, I just wanted to mention that my disinterest in IPMI isn't the standard "I don't know what it is, so I must not need it" attitude that seems to happen a lot with noobs around here, but rather "I don't trust it not to get brute-forced on this firewall-less network." The only way I could see actively using it would involve some form of dedicated firewall, or a DD-WRT box between it and the main network, and I would still have to configure SSH access through that just to reach the IPMI config. Sounds like a PITA, but maybe I'm wrong.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I think the professor running the lab had some interest in the box sitting in his office

This is honestly the best reason in favour of a hot, loud rackmount system. Physical access is the security trump card, and I can't imagine the professor's office being more secure than your datacenter. Get a nice big 4U box and load it with 80mm screamers in a redundant push-pull setup, and he'll be asking for it to be locked up safe and sound in no time.

Also, I just wanted to mention that my disinterest in IPMI isn't the standard "I don't know what it is, so I must not need it" attitude that seems to happen a lot with noobs around here, but rather "I don't trust it not to get brute-forced on this firewall-less network." The only way I could see actively using it would involve some form of dedicated firewall, or a DD-WRT box between it and the main network, and I would still have to configure SSH access through that just to reach the IPMI config. Sounds like a PITA, but maybe I'm wrong.

There have definitely been exploits against that kind of target, so you're right. IPMI and other management traffic should absolutely be on a separate network or VLAN. There's serious value in out-of-band management (especially if you end up burying the box in your DC like I suggested), so I would look at working with the rest of your team (campus IT is pretty tight-knit, in my past experience) to get it on its own little isolated network.
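If you do end up using it, pointing the BMC at a dedicated management subnet is straightforward with ipmitool; the channel number and addresses below are placeholders for whatever your isolated network actually ends up being:

Code:
  # put the BMC on a dedicated management subnet (channel/addresses are examples only)
  ipmitool lan set 1 ipsrc static
  ipmitool lan set 1 ipaddr 10.10.10.5
  ipmitool lan set 1 netmask 255.255.255.0
  ipmitool lan set 1 defgw ipaddr 10.10.10.1
  ipmitool lan print 1    # verify the settings took

And change the default IPMI password while you're in there, whatever else you do.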
 

xiSlickix

Dabbler
Joined
Feb 5, 2014
Messages
47
I think I got him to agree to let us rack-mount it in the server room. Looking at this guy as a decent budget-friendly option, though I will need to get rails for it...
It can handle 15 drives. Our initial build will be 6 drives. Once they figure out how to fill ~16 TB (probably more like ~14.5 really), we'll drop another 6 in with the IBM M1015 card (or whatever the new hotness is at the time) as a second vdev, and they'll be in business. When and if they figure out how to fill all of that, I'll be sad / scared.
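For the curious, the ~14.5 figure is just the TB-to-TiB conversion on the four data disks of a 6-disk RAIDZ2, assuming 4 TB drives (swap in the real drive size as needed):

Code:
  # 6-disk RAIDZ2, two disks of parity: 4 data disks x 4 TB raw (example drive size)
  echo "scale=2; 4 * 4 * 10^12 / 2^40" | bc
  # => 14.55 TiB, before ZFS metadata overhead and free-space headroom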
 