BUILD My first FreeNAS build...Did I do it right?

Status
Not open for further replies.

Hoowahman

Cadet
Joined
Jul 4, 2014
Messages
8
Newbie here! I apologize in advance because this post is going to be long. I've spent the last few days reading through tons of FreeNAS information, and I can't believe how much there is to tinker with and learn. The reason I got to this point is that I'm a huge data hoarder and, like most of you here, have lots of things I've been collecting over the years.

Three years ago I bought a Sandisk 4-bay external RAID enclosure, and I have 4x3TB drives in it on RAID 5 (OOPS! I thought this was a good idea at the time). I thought RAID 5 would be great for me... well, one of the drives died, and it has been rebuilding for over a week and seems to be stuck at 24% (maybe this is normal for RAID 5 on such large disks... I don't know). Not only did my drive die, but the crappy external enclosure died as well, which I replaced a week ago.

So basically I am motivated to create a sweet storage system that should last me at least 4-5 years and can cover all my network storage needs: SABnzbd / torrents / Plex / Time Machine backups, etc. I'm not that worried about catastrophic failures taking out the whole storage system, like if my house were to burn down. I am looking for reliability, not necessarily high performance. I hope the onboard SATA controllers won't be a bottleneck for me, but I'm only running Gigabit Ethernet anyway, so 100+ MB/s will be sufficient, and I hope to max out my network connection when reading and writing.

I am looking to build this NAS within the next week so I can get my data off that crap external enclosure ASAP; then I'll send it back to the retailer for my money back, even if there's a restocking fee. Anyway, this is what I've come up with so far after reading around the forums and product reviews on retail sites.

Rosewill RSV-L4500 Black Metal Case (holds 15x3.5" hard drives)
$99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16811147164

SUPERMICRO MBD-X10SL7-F-O uATX
$243 - http://www.newegg.com/Product/Product.aspx?Item=N82E16813182821

Intel Xeon E3-1270V3 Haswell 3.5GHz 8MB L3 Cache LGA 1150 80W
$349 - http://www.newegg.com/Product/Product.aspx?Item=N82E16819116904

Kingston 32GB (4 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1600 Server Memory
$377 - http://www.newegg.com/Product/Product.aspx?Item=N82E16820239371

SeaSonic X Series X650 Gold (SS-650KM Active PFC F3) 650W ATX12V V2.3/EPS 12V V2.91
$130 - http://www.newegg.com/Product/Product.aspx?Item=N82E16817151088

9x Seagate Desktop HDD.15 ST4000DM000 4TB 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Bare Drive
$150 - http://www.newegg.com/Product/Product.aspx?Item=N82E16822178338

7x Seagate Barracuda 7200.14 ST3000DM001 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Bare Drive
$110 - http://www.newegg.com/Product/Product.aspx?Item=N82E16822148844

SanDisk Cruzer Fit 4GB USB 2.0 Flash Drive Model SDCZ33-004G-B35
$6 - http://www.newegg.com/Product/Produ...&cm_re=4GB_usb_sandisk-_-20-171-586-_-Product

Total: ~$2,600

So there you have it. I don’t care about the price though if this gives me a ton of space for years to come with reliability.

Here are some other questions I have about this setup:

Storage:
It looks like the motherboard should be able to control exactly 14 SATA drives, and it sounds like I should flash the onboard controller to IT firmware for direct disk access, since I won't need the RAID functionality with ZFS. Has anyone done this, and does it work well? Is it even needed? Hopefully this won't bottleneck my network speed.

My FreeNAS storage configuration will consist of 2 pools:
pool 1: 1 vdev of raidz2 consisting of 8x4TB drives = 24TB of usable space
pool 2: 1 vdev of raidz2 consisting of 6x3TB drives = 12TB of usable space
I figured keeping the vdevs in different pools would give me more flexibility to recreate a vdev if I need to in the future, and I could possibly move all the data from one to the other if I have the space. I am also planning on reusing those 4x3TB drives from my external enclosure, but I need to transfer their data to my 24TB pool first.
Does it make sense to be cautious with two separate pools, or is that silly and not worth the management overhead? Should I just have one pool and extend it with the 3TB vdev?
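For what it's worth, the raw-capacity math behind those two pool layouts is simple to sketch (hypothetical helper; this uses manufacturer TB and ignores swap/metadata reservations):

```python
def raidz2_usable_tb(num_drives, drive_tb):
    # raidz2 stores data on (n - 2) drives; the other two hold parity
    return (num_drives - 2) * drive_tb

print(raidz2_usable_tb(8, 4))  # pool 1: 8x4TB -> 24 (TB)
print(raidz2_usable_tb(6, 3))  # pool 2: 6x3TB -> 12 (TB)
```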

Should I be concerned about performance with a 6+2 (data+parity) raidz2 layout on the 4TB drives instead of 4+2? My research suggests it won't be a big deal for my setup.

PSU / Power:
I used a calculator here http://extreme.outervision.com/PSUEngine which came up with 499W minimum, 549W recommended. So it seems this 650W PSU should do the trick. However, since I am planning on having 14 total drives in the system, with an offline spare for each vdev, do I need to be really careful about startup amperage, and should I be searching for something else? Also, the PSU only comes with 8 SATA power connectors; since it's modular anyway, can I add 6-8 more somehow? What do people usually do to power more than 8 drives in a system?

Other Questions:
For people with similar hardware: how long can I expect a 4TB drive rebuild to take in a raidz2 array? I certainly hope it won't take as long as my external enclosure. 24 hours? 48 hours? A week?

Also, for all my drives, I think I'd like to get the extended 3-year warranty from Newegg for $21 extra per drive. I've noticed in the past that even a hard drive that's working well tends to die within 2-5 years, which would be past the manufacturer's warranty. Has anyone done this? Do you think it's worth it?

Thank you for reading this far and I really appreciate any feedback you can give!

Hoowahman
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A few thoughts:

Don't use 7200RPM drives. You'll be bottlenecked by your network anyway, but you'll still need to get rid of all the extra heat.

That CPU is most likely overkill, unless you want to transcode like there's no tomorrow.

RAIDZ2 is optimal with 6 drives - everything else is currently being debated. 8 drives shouldn't be too bad...

Don't count on 24TB (or 12TB) of storage. HDDs come in manufacturer pseudo-TB (1 HDD TB = 10^12 bytes, vs. 1 TiB = 2^40 bytes), and you lose a tiny bit more to swap and similar reservations.
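The conversion is easy to sanity-check (a quick sketch; the exact loss to swap and reservations varies):

```python
TB = 10**12   # manufacturer terabyte
TiB = 2**40   # what the OS actually reports

def tb_to_tib(tb):
    return tb * TB / TiB

print(round(tb_to_tib(24), 1))  # "24 TB" usable -> ~21.8 TiB before reservations
```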

You definitely should avoid Kingston at all costs. They've been pulling some shady maneuvers lately. Concretely, Kingston is known to be problematic with Supermicro X10 motherboards. Get Crucial/Micron, Samsung or Hynix from the compatibility list.

Those PSU size calculators are mostly full of shit. My typical recommendation (for 5400RPM drives) is 30W per drive + 60W for the rest of the system + whatever you need for exotic cooling solutions. For 16 drives, we're talking ~540W. If you insist on 7200RPM drives, count on 20% more power at spinup, so 650W is the minimum I recommend (perfectly alright with the Seasonic - that thing could probably take 750W at boot without breaking a sweat).
To connect all your drives, just add extensions and the like. Look around a bit and you'll find plenty of such adapters (The more daring make their own custom cables).
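That rule of thumb is easy to turn into a quick estimate (a sketch using only the numbers from this post):

```python
def psu_estimate_w(num_drives, watts_per_drive=30, base_w=60, spinup_factor=1.0):
    # 30W per 5400RPM drive + 60W for the rest of the system;
    # 7200RPM drives draw roughly 20% more at spinup
    return num_drives * watts_per_drive * spinup_factor + base_w

print(psu_estimate_w(16))                     # 5400RPM build: 540.0 W
print(psu_estimate_w(16, spinup_factor=1.2))  # 7200RPM spinup: 636.0 W
```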

Rebuilds are typically quoted as taking a couple of hours, so I'd say less than a day (no real experience in this area).

Yes, the LSI 2308 must be in IT mode. Yes, it works like a charm. Yes, it's easy to flash (follow the guide and use the P16 firmware and do not install the boot ROM). No, IT mode does not bottleneck anything - if anything it should be faster. No, none of the storage controllers will bottleneck you in any way. Even if you used the fastest SSDs on the market, the LSI 2308 has enough PCI-e bandwidth to run them all at full speed. The PCH is a bit tighter, but it's still plenty for HDDs.
 

Hoowahman

Cadet
Joined
Jul 4, 2014
Messages
8
Thanks man! Love this feedback and was exactly what I was looking for.

I'm going to keep the CPU to future proof the system. It also gave me $65 off in a combination deal with the motherboard on Newegg.

I've switched my memory to Crucial with 4x8GB Unbuffered ECC DIMMS - http://www.newegg.com/Product/Product.aspx?Item=N82E16820148770

I've also decided to not invest any more money on 3TB and to do the following instead:

pool 1: 1 vdev of raidz2 consisting of 10x4TB drives = 32TB of usable space
pool 2: 1 vdev of raidz2 consisting of 4x3TB drives = 6TB of usable space (raidz2 loses two drives to parity; still going to use these 7200RPM drives because I already have them in my external enclosure)

That hard drive configuration actually ended up costing me about the same, and I'll gain 2TB of space overall!
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Your pool 1 will only have 29.1 TiB of disk space. And since you shouldn't expect to fill any disk-based filesystem beyond 80-90%, you should plan on about 24 TiB usable.
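The 29.1 TiB figure falls straight out of the unit conversion (a sketch; the 80% figure is the rule-of-thumb fill limit, not a hard number):

```python
TB, TiB = 10**12, 2**40

raw = 8 * 4 * TB / TiB       # 10x4TB raidz2 -> 8 data drives' worth
print(round(raw, 1))         # ~29.1 TiB
print(round(raw * 0.8, 1))   # plan on ~23.3 TiB at an 80% fill limit
```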
 

Hoowahman

Cadet
Joined
Jul 4, 2014
Messages
8
I was reading that a 10x4TB raidz2 array will actually have a lot of overhead, about 1.3TB of space used. They then created the pool with ashift=9, which uses 512-byte sectors, and the space used was only 0.45TB. Is this not recommended, and should I stick with 4K?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Please do not mess with the 4K default that FreeNAS 9.2.1.6 is using. Trust me :D

That has nothing to do with my post. With 10 drives in RAID-Z2, only 8 of them hold data, and then you have to convert from the TB that drive manufacturers use to what your Unix, Linux or Windows system reports, which is TiB. Please read the Wikipedia article linked from my post.
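For the curious, the extra overhead mentioned a couple of posts up largely comes from raidz allocation padding, not from the TB/TiB conversion. This is an illustrative sketch of the commonly described raidz rule (one parity sector per stripe row, allocations padded up to a multiple of parity+1 sectors), not FreeNAS's exact accounting:

```python
import math

def raidz2_allocated_sectors(record_bytes, sector_bytes, data_disks):
    # data sectors + 2 parity sectors per stripe row,
    # padded up to a multiple of (parity + 1) = 3 sectors
    data = math.ceil(record_bytes / sector_bytes)
    rows = math.ceil(data / data_disks)
    return math.ceil((data + 2 * rows) / 3) * 3

# 128KiB record on a 10-wide raidz2 (8 data disks):
for ashift in (12, 9):
    sector = 2 ** ashift
    alloc = raidz2_allocated_sectors(128 * 1024, sector, data_disks=8)
    ideal = (128 * 1024 // sector) * 10 // 8  # data plus an exact 2/8 parity share
    print(f"ashift={ashift}: {alloc} sectors allocated vs {ideal} ideal")
```

With 4K sectors the padding works out to a few percent of the pool, which is roughly the scale of the ~1.3TB observed; with 512-byte sectors it nearly vanishes, but ashift=9 on 4K-sector drives hurts performance badly, hence the advice to leave the default alone.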
 