BUILD Best way to achieve 14 drives?

Status
Not open for further replies.

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
Hi all,

I currently have a file server running Ubuntu with 6x1TB drives in RAID5 that needs to be replaced. It currently runs Plex and VirtualBox, and serves as my primary file server for all media and critical data.

My goal is to replace the existing server with a new FreeNAS build. With the new build I would reuse the existing 6x1TB drives but reconfigure them as RAIDZ1 or RAIDZ2. I also plan on purchasing 6 more drives, possibly 3TB or 4TB. A last possible addition is a couple of SSDs to run a couple of VMs in VirtualBox.

I'm looking at purchasing an SC836E26-R1200, which has 16 drive bays. My question is: what is the best way to achieve 14 drives? The hardware sticky recommends the X10SL7-F motherboard, but I can't find a source for it in Canada. Newegg does carry the X10SL7-F-O. With this board would I still need an M1015? Sorry if this is a simple question, but I haven't been able to wrap my head around the controller configuration.

Thanks.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I'd do 2x RAID-Z2 vdevs of 7 drives each ;)

The -O is just the packaging of the MB (retail or bulk IIRC), both MB are exactly the same otherwise :)
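
For anyone weighing that suggestion against a single wide vdev, here is a rough Python sketch of the trade-off. The 4TB drive size is just an example and the estimates ignore ZFS overhead, so treat the numbers as ballpark only:

DRIVE_TB = 4

layouts = [
    # (label, [(drives per vdev, parity drives per vdev), ...])
    ("2x 7-drive RAID-Z2", [(7, 2), (7, 2)]),
    ("1x 14-drive RAID-Z2", [(14, 2)]),
    ("1x 14-drive RAID-Z3", [(14, 3)]),
]

for label, vdevs in layouts:
    usable = sum((n - p) * DRIVE_TB for n, p in vdevs)
    parity = sum(p for _, p in vdevs)
    print(f"{label:>21}: ~{usable} TB usable, {parity} parity drives in total")

The split costs some capacity compared to one wide vdev, but keeps more parity online in total and keeps each vdev narrower.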
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Newegg does carry the X10SL7-F-O. With this board would I still need a M1015?
Nope. That board will handle tons of drives given the right cables and a SAS expander. I currently have 20 disks running off 4 of the LSI ports.
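
To make the port math concrete, here is a rough budget sketch in Python. The port counts for the X10SL7-F (8 ports on the onboard LSI 2308 plus 6 chipset SATA) and the 16-bay expander backplane in the SC836E26 are my assumptions, so double-check them against the manuals:

onboard_sas = 8      # LSI 2308 on the X10SL7-F (assumed 8 SAS2 ports)
onboard_sata = 6     # chipset SATA ports (assumed)
chassis_bays = 16    # SC836E26-R1200
drives_planned = 14  # 6x1TB reused + 6 new HDDs + 2 SSDs

# With the E26 expander backplane, the LSI ports alone can reach every bay;
# the chipset SATA ports are still free for boot devices or SSDs.
direct_attach = onboard_sas + onboard_sata

print(f"Drives planned: {drives_planned}")
print(f"Direct-attach ports without an expander: {direct_attach}")
print(f"Bays reachable through the expander backplane: {chassis_bays}")
print("Extra HBA (M1015) needed:",
      "no" if drives_planned <= max(direct_attach, chassis_bays) else "yes")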
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
I'd do 2x RAID-Z2 vdevs of 7 drives each ;)

The -O is just the packaging of the MB (retail or bulk IIRC), both MB are exactly the same otherwise :)
Thanks. Why would you split it into two separate vdevs instead of one?
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
I am now reconsidering the HD configuration.

Would there be any concerns running 10x3TB RAIDZ2 or 6x6TB RAIDZ2? Is that enough redundancy? Or would it be better to consider RAIDZ3 (in which case I would add another drive to either configuration)?
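
Here is a back-of-the-envelope usable-capacity comparison in Python for the layouts being weighed, including the RAIDZ3 variants with one extra drive. It ignores ZFS metadata, padding, and the usual "keep it under ~80% full" guideline, so the numbers are rough upper bounds:

layouts = [
    # (label, number of drives, drive size in TB, parity drives)
    ("10x3TB RAIDZ2", 10, 3, 2),
    ("6x6TB RAIDZ2",   6, 6, 2),
    ("11x3TB RAIDZ3", 11, 3, 3),
    ("7x6TB RAIDZ3",   7, 6, 3),
]

for label, n, size_tb, parity in layouts:
    raw = n * size_tb
    usable = (n - parity) * size_tb
    print(f"{label:>14}: raw {raw} TB, ~{usable} TB usable, "
          f"survives {parity} simultaneous drive failures")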
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Would there be any concerns running 10x3TB RAIDZ2 or 6x6TB RAIDZ2? Is that enough redundancy? Or would it be better to consider RAIDZ3 (in which case I would add another drive to either configuration)?

There are some good pointers in the thread below that are highly relevant for you to reflect on in order to make the best judgement (or to help you reach one; post your reactions and ideas).
https://forums.freenas.org/index.php?threads/advise-on-a-new-build-for-a-video-server.42717/

Cheers / Dice
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
There are some good pointers in the thread below that are highly relevant for you to reflect on in order to make the best judgement (or to help you reach one; post your reactions and ideas).
https://forums.freenas.org/index.php?threads/advise-on-a-new-build-for-a-video-server.42717/

Cheers / Dice
Dice, I like your signature with the good reads!

I have previously looked at sizing the RAID and which level to use, and was just looking for some expert opinions. So I have read the link you provided and others on the subject.

My take on it is this...

RAIDZ1 is dead because of the increasing size of drives and the possibility of UREs during a rebuild that could doom the array.
RAIDZ2 in theory should protect you from UREs while resilvering when a single drive has failed.

When one drive fails in an array, the chances of another drive failing increase.
In this case RAIDZ3 would be beneficial if you believe that a second drive failure is imminent. The window for catastrophic failure is the time it takes to resilver. So if you have large drives, resilvering will take longer, and RAIDZ3 could be recommended. With smaller drives resilvering will be faster and the window for a second drive failure during the resilver will be smaller.

Conclusion:
1. RAIDZ2 is the minimum required to protect against UREs during the resilvering of a single failed drive.
2. RAIDZ3 is recommended if you are using large drives and expect resilvering to take a long time.
3. There is no common consensus or accepted rule on when to prefer RAIDZ3 over RAIDZ2.
4. I am leaning towards RAIDZ2 with 3TB drives, RAIDZ3 with 6TB drives, and 4TB undecided.

Any flaws in my reasoning?

If I could convince myself that RAIDZ2 is sufficient for 8x4TB, then I would have enough room to add another 8x4TB RAIDZ2 in the future for expansion since my case has room for 16 drives.
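
To put rough numbers on the URE argument above, here is a quick Python sketch using the commonly quoted simplified model: every bit read has an independent 1-in-1e14 chance of being unrecoverable (a typical consumer HDD spec, assumed here). It overstates the real risk, since ZFS only reads allocated blocks and can often report or repair a single bad block rather than losing the pool, but it shows why RAIDZ1 looks scary on paper while RAIDZ2/Z3 keep extra parity in reserve:

URE_RATE = 1e-14  # assumed probability of an unrecoverable read error per bit

def p_at_least_one_ure(drives_read, drive_size_tb, fill=1.0):
    """Chance of hitting at least one URE while reading the surviving drives."""
    bits_read = drives_read * drive_size_tb * 1e12 * 8 * fill
    return 1 - (1 - URE_RATE) ** bits_read

for label, n, size_tb in [("8x3TB", 8, 3), ("8x4TB", 8, 4), ("6x6TB", 6, 6)]:
    # After one failure, a worst-case resilver reads (n - 1) full drives.
    p = p_at_least_one_ure(n - 1, size_tb)
    print(f"{label} after one failure: ~{p:.0%} chance of at least one URE "
          f"during the resilver (worst case, drives full)")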
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I'd highly recommend an extra 4TB drive. Burn it in with the first set. Either mark it as a spare or keep it on the shelf.

Should a drive fail, you've got a replacement on hand. Often we see users running on borrowed time after receiving email messages regarding SMART errors. Then, after the drive fails, they initiate an RMA. Then they worry about how long they can run on the spare (donut) tire.

Having an extra drive on hand and ready to go is cheap insurance.
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
I'd highly recommend an extra 4TB drive. Burn it in with the first set. Either mark it as a spare or keep it on the shelf.

Should a drive fail, you've got a replacement on hand. Often we see users running on borrowed time after receiving email messages regarding SMART errors. Then, after the drive fails, they initiate an RMA. Then they worry about how long they can run on the spare (donut) tire.

Having an extra drive on hand and ready to go is cheap insurance.
This is good advice and what I do with my current 6x1TB RAID5 setup. My build only allows for 6 drives, but I always have a spare ready. Over the years I've had a few drives fail in my RAID. Once a drive fails, I refresh backups of any critical data, swap the failed drive with the new spare on hand, start the rebuild, and then proceed to RMA the failed drive.

I admit I never actually burned in the spare/replacement drives. I probably should do this in the future.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
3. There is no common consensus or accepted rule on when to prefer RAIDZ3 over RAIDZ2.

Following what seems to go on in the forums, *my* conclusion on the topic is:
Raidz2: when no future upgrades are planned short of a significant total rebuild. Typically this covers 4-11 drive setups.
Raidz3: when future upgrades are planned, i.e. the box is intended to last the next 4-5 years, including one substantial upgrade by adding another vdev of ~6-11 drives. This scenario merits evaluating Raidz3.
The first reason is the ever-increasing size of HDDs, which pushes us towards the end of 2-drive-redundancy setups (Raidz2/RAID6) (link in 'Good Reads' -> Why to consider raidz3). Awareness of this plays quite an important role in setting up a good foundation for future upgrades.
The second reason is related. When adding a second vdev to the same zpool, the stakes become even higher. Particularly if the drives used for the system today are, e.g., 6TB, and the available drives at the point of the upgrade are 12TB... it would *really* stretch Raidz2 thin, considering the amount of data to be shuffled while resilvering. At that point I'd be really happy to have a thought-through upgrade strategy sitting pretty at Raidz3.

There are probably more thoughts on this, but at least it gives you an idea of how one can reason about the future plans of a FreeNAS build.
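
To get a feel for how the resilver window grows with drive size, here is a small Python sketch. The 150 MB/s sustained rebuild rate is an assumption for a typical 7200 rpm HDD; real resilvers are throttled by pool activity and fragmentation, so actual times are usually worse:

SUSTAINED_MB_S = 150  # assumed average rebuild throughput of one HDD

def resilver_hours(drive_size_tb, fill=0.8):
    data_mb = drive_size_tb * 1e6 * fill  # data rewritten onto the new drive
    return data_mb / SUSTAINED_MB_S / 3600

for size_tb in (3, 4, 6, 12):
    print(f"{size_tb:>2} TB drive, 80% full: ~{resilver_hours(size_tb):.0f} h "
          f"running with reduced redundancy")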

Cheers / Dice
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
Raidz3 should be considered if you plan to upgrade the current box AND plan on adding the new vdev to the existing zpool. But with the current drives I think raidz2 is sufficient for 8x3TB (still debating 8x4TB). The new future vdev could be raidz3 and in its own separate zpool. This is the upgrade path I am considering.

Again, thanks for the great feedback!
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The new future vdev could be raidz3 and in its own separate zpool. This is the upgrade path I am considering.
Technically, with some 'extra space for leeway', you'd probably be able to set up an 8x HDD raidz2 now and, when upgrading, construct a new zpool with 8x HDD raidz3. At that point, migrate the data to the new pool, then add the older drives to this new zpool as their own raidz3 vdev.
Something along these lines is probably what I'll be looking into doing over the next few years.
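
A quick Python sketch of the 'extra space for leeway' check behind that plan: will the data from the old 8-drive raidz2 pool fit on the new 8-drive raidz3 pool while staying under the usual ~80% fill guideline? The 4TB/12TB drive sizes and the 80% figures are placeholders for whatever is actually in play at upgrade time:

def usable_tb(n_drives, size_tb, parity):
    # crude estimate; ignores ZFS overhead and padding
    return (n_drives - parity) * size_tb

old_pool_used_tb = 0.8 * usable_tb(8, 4, parity=2)  # assume old pool is ~80% full
new_pool_tb = usable_tb(8, 12, parity=3)            # hypothetical 12TB drives

fill_after_migration = old_pool_used_tb / new_pool_tb
print(f"Old pool data: ~{old_pool_used_tb:.1f} TB")
print(f"New raidz3 pool: ~{new_pool_tb:.1f} TB usable")
print(f"Fill after migration: {fill_after_migration:.0%} "
      f"({'OK' if fill_after_migration <= 0.8 else 'too tight'})")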
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
Technically, with some 'extra space for leeway', you'd probably be able to set up an 8x HDD raidz2 now and, when upgrading, construct a new zpool with 8x HDD raidz3. At that point, migrate the data to the new pool, then add the older drives to this new zpool as their own raidz3 vdev.
Something along these lines is probably what I'll be looking into doing over the next few years.
Thanks, that's a great suggestion. The reason my upgrade plan has a separate zpool is that I don't want to trash the existing drives in 5 years. I would just want to add extra capacity to the existing box, and a new zpool would accomplish that since the new capacity would not necessarily need to be part of one huge pool.

I think I am now pretty much decided on RAIDZ2 8x4TB drives for now and leaving 8 bays open for future expansion.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Cool. FreeNAS builds are all about planning ahead. You're setting yourself up well so far.
On the other hand, I at least am humbled by the thought of buying 7-8 HDDs at *once* for *a single* upgrade (i.e. it is extremely far from my Windows-JBOD-enthusiast background xP).
Though, moving to FreeNAS was a bit of 'entering a new world order' in terms of conceptualizing storage vis-a-vis individual bricks of HDDs.

Welcome onboard ;)
 

crimsondr

Dabbler
Joined
Feb 6, 2015
Messages
42
Yes, it's quite an expensive upgrade lol... I actually have two file servers running now: one Ubuntu server running RAID5, VirtualBox, Plex, Transmission, etc., and another with Unraid that takes all my mismatched/leftover drives. In total it's about 10TB and completely full lol... So I need some new storage. Adding more drives ad hoc isn't really working for me, especially since I hate the speed of Unraid, and my Ubuntu installation is so old I need somewhere to move the data and refresh the build.

I was hesitating for a while, but as I did the research I decided to take the plunge!
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
In case you're interested in another newbie-intro read that may contain some additional insights for you to soak in, check this here.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Just to add something not already said: I'd use RAID-Z2 with 6 to 10 drives and a cold spare, or RAID-Z3 with 8-11 drives and no cold spare. I've chosen the second way because I don't like the idea of buying a drive just to put it on a shelf, especially as drive prices drop with time. RAID-Z3 offers plenty of time to replace a failed drive (with a bigger one, why not...) without the need to have one on the shelf :)
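
For comparison, a small Python sketch of those two approaches with the same number of purchased drives (11x4TB here, purely as an example; estimates ignore ZFS overhead as before):

DRIVE_TB = 4
DRIVES_BOUGHT = 11

z2_usable = (DRIVES_BOUGHT - 1 - 2) * DRIVE_TB  # 10-wide Z2, one drive on the shelf
z3_usable = (DRIVES_BOUGHT - 3) * DRIVE_TB      # 11-wide Z3, every drive in the pool

print(f"RAID-Z2 (10-wide) + cold spare: ~{z2_usable} TB usable, "
      f"2 parity drives online, replacement already in hand")
print(f"RAID-Z3 (11-wide), no spare:    ~{z3_usable} TB usable, "
      f"3 parity drives online, buy a replacement when one fails")

With the same purchase, usable capacity comes out the same; the choice is between a replacement sitting on the shelf versus a third parity drive already working in the pool.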
 