Hello and how are ya?

Status
Not open for further replies.

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
Hey Everyone,

Just got approved for my account. I've been reading for the past week and a half or so and ordered pretty much everything except the 5x6TB WD Reds I'll need for my NAS last night. I'm going to be using the following hardware:

Motherboard/CPU: SUPERMICRO MBD-A1SRi-2758F-O w/ Intel Atom C2758
Memory: 4x Kingston 8GB 204-Pin DDR3 SO-DIMM ECC Unbuffered DDR3 1600
Storage: 1x Crucial BX100 2.5" 250GB, 5x WD Red WD60EFRX 6TB & 8GB Sandisk Cruzer for FreeNAS
Power Supply: SILVERSTONE ST50F-ESG 500W
Case: Fractal Design Node 304


I've designed this so it's a bit overkill for the ~20TB of usable storage it'll have at first, but that way I can upgrade it down the road at minimal cost. I'll have everything installed in the box by next week, minus the hard drives. The drives will probably be a few weeks behind the rest of the parts, but with my schedule that's how it has to be. Ready for the sad part? For the time being this box will only be accessible via WiFi. It might end up staying on WiFi, which should be fine since it's mostly doing streaming and file backup.

I travel extensively around the country and sometimes internationally, so I'll be setting this up for remote access as a Plex server, Time Capsule target, VPN, and Usenet box.

Thoughts?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So is that 5 x 6TB Reds in RAIDZ2? Do be aware that ZFS gets grumpy and slow once you fill it past perhaps 80-90% (the rule of thumb is 80%). So if that were RAIDZ2, the pool size would be ~18TB and the usable space comes out to more like 14TB. Now would be a great time to add a sixth 6TB drive to the mix in order to get ~20TB of fully usable space.
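The arithmetic above can be sketched like this (a rough estimate only: it ignores ZFS metadata overhead and the TB-vs-TiB difference, and the 80% figure is the rule of thumb, not a hard limit):

```python
def raidz_usable_tb(drives, size_tb, parity, fill=0.8):
    """Rough RAIDZ capacity: raw space minus parity drives, then the
    ~80% fill rule of thumb. Ignores metadata overhead and TB vs TiB."""
    pool_tb = (drives - parity) * size_tb
    return pool_tb, pool_tb * fill

print(raidz_usable_tb(5, 6, parity=2))  # (18, ~14.4) -- the numbers above
print(raidz_usable_tb(6, 6, parity=2))  # (24, ~19.2) -- with a sixth drive
```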
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9

I've been debating this for a while. RAIDZ2 is obviously the safer path, especially with future expansion in mind. My numbers were based on RAIDZ1, which probably isn't the best idea, but I still have time. My case only supports 6 drives total, so with the SSD for cache/jails I don't have room for a sixth data drive without replacing the case or adding a JBOD and external enclosure for the additional drives.

Looking at the case, I could probably mount the SSD to the top if I drilled a few holes, but the motherboard only supports 6 drives, so I'd need to buy a RAID card to add the extra drive, at which point the build price goes way up.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We don't really suggest RAIDZ1 for the larger drives, because the rebuild time and the stress involved in rebuilding will sometimes cause a second drive failure. That said, if you have backups of whatever is important on the NAS (which you should have anyway), it would merely be an inconvenience if something went wrong.
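To put a rough number on that rebuild risk, here's a back-of-the-envelope sketch. The 1e-14-errors-per-bit figure is the typical consumer-drive spec-sheet rate, used here purely for illustration (real-world rates are usually better, and ZFS can often recover from a single bad sector):

```python
def expected_ures(read_tb, ure_per_bit=1e-14):
    """Expected unrecoverable read errors during a rebuild that must
    read `read_tb` terabytes, at a spec-sheet URE rate per bit."""
    bits_read = read_tb * 1e12 * 8
    return bits_read * ure_per_bit

# A RAIDZ1 rebuild of a 5 x 6TB pool reads the 4 surviving drives:
print(expected_ures(4 * 6))  # ~1.9 expected errors at the spec-sheet rate
```

With RAIDZ2, a single URE during a one-disk rebuild is still correctable, which is the crux of the recommendation.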
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
Yeah, I know I need to go with RAIDZ2, but my inner gambler wants the extra space. I doubt I'll fill 17TB in the few years before I can move the system to another case and add more drives, so there's probably no real-world cost to doing RAIDZ2 now, especially since I can't switch later without moving everything off the pool.

The NAS will be a backup; most files will live elsewhere, but the media would be the hardest thing to source again, and it just takes time to download and re-import into Plex.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, you could maybe do something a little hacky: go six drives in RAIDZ2 and then use the available PCIe slot to host an M.2 adapter with something like a Samsung XP941 as your SSD. That's an AHCI option, not NVMe. It gets a little pricey as a way to work around a lack of SATA ports, but it gives you an extra option to contemplate. With 32GB of RAM you do have sufficient ARC to support a small L2ARC, so splitting your SSD into maybe two 128GB partitions is possible. Do note that splitting drives into partitions isn't a supported configuration and you're a little bit "on your own." I've linked to components that are actually proven to work (we have these in our VM filer here), but there may be cheaper options.
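A quick sanity check on why 32GB of RAM can carry a ~128GB L2ARC: L2ARC bookkeeping itself consumes ARC (RAM), so a common rule of thumb caps L2ARC at roughly 4-5x the ARC size. The ARC fraction and ratio below are illustrative assumptions, not FreeNAS settings:

```python
def l2arc_budget_gb(ram_gb, arc_frac=0.75, ratio=5):
    """Rule-of-thumb ceiling for L2ARC size: most RAM goes to ARC,
    and L2ARC should stay within ~4-5x ARC so its headers don't
    crowd useful data out of RAM. Both figures are heuristics."""
    return ram_gb * arc_frac * ratio

print(l2arc_budget_gb(32))  # 120.0 -> a ~128GB partition is in the right ballpark
```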
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
I completely forgot about that option and haven't seen one of those in years; I'm getting back into tech after years of my hobbies being elsewhere. I could probably cut $150 or so off that by getting an eSATA card and an eSATA enclosure for a SATA drive and still get decent performance.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We don't really recommend the eSATA enclosure route; it's better to have a unified power source for your system.
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
Of course. I'll probably just end up going with RAIDZ2 and eating the storage loss until I can move the system to a bigger case and add additional drives in a few years.

Thanks for the feedback!
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
Finally got all the pieces installed today, only to realize that when I moved 5 months ago I didn't bring my VGA cables from California. Guess memtest is gonna have to wait for another day. =(
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Just use IPMI.
 

PainCorp

Cadet
Joined
Oct 25, 2015
Messages
9
I had to go home anyway, and it's probably a good idea to have a VGA cable and a keyboard at my new place.

Memtest86+ has been running for 5 hours with no problems so far. I'm probably gonna start ordering a drive or two next week to make sure I get ones from different batches.

Been reading up on CrashPlan as a backup option. Love the price, hate the headache. Does the "local" machine need to be on all the time in order for FreeNAS to keep backing up, or is the local machine only needed to configure CrashPlan?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
The local machine is only needed for configuring it.
 