I'm just not clear...

Status
Not open for further replies.

dtrobson099

Cadet
Joined
Mar 30, 2012
Messages
3
I've been Googling/searching/reading for two hours now, and it's probably because I'm not asking the right questions, but I can't grasp some of the nuances of FreeNAS vs. unRAID.

I've been using unRAID for a month or so now, and I had a drive failure... or SMART errors. I don't really understand that stuff - but rudimentary Googling on my situation convinced me to RMA the drive. That didn't really solve the problem and now I can't access my user shares. I read the unRAID forums and see that it's been a LONG time since they released an updated version (for paid software) and I really don't want to give an outfit like that any more money when I expand my array. Hence my curiosity in switching to FreeNAS.

The reason I was using unRAID was because I have a 10 drive tower and only 6 2TB drives ATM. I was planning on using ZFS, but the thought of trying to migrate a 10TB array makes me shudder. I'd love to use RAIDZ1 or RAIDZ2, but expandability is a problem. I realize much has been written on the subject, and I think I have a strategy based on this, but I want to be sure my theory is sound.

For the time being, I would like to fill my case with cheapo 20GB hard drives just to get the array up to 10 drives. Later, as 3TB drives and 4TB drives become available and the prices work their way down, I'd like to replace these drives to expand my array.

I want a "one-stop shop" for my files. I love the user shares experience with unRAID. Everything sits inside my NAT firewall at home, so I have zero need for additional security. I've been reading up on FreeNAS, but without actually messing around with it, it's hard to understand if they have a similar ability. Or, since it's RAID, is it just one big drive and I break it into folders? That's fine too.... actually I probably prefer that. I do apologize in advance for the stupid questions - it's just hard for me to visualize based on what I'm reading so far.

I've noticed that recommendations are to split the array when I've got 10 drives... I really would rather not...

Ultimate goal here is an array of 10 4TB drives with 2 drives assigned to parity, and maybe one assigned to caching if performance is that bad. At that point I would hope that I have enough, but I suppose if it wasn't I could always swap out 4TB drives for 6TB or 10TB drives that far into the future.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
If you're going to be using 12 drives, don't think 10 drive array plus two parity drives, think 12 drive array. No single drive is dedicated to parity, they all share the load equally.

I've seen a lot of warnings against using such a high number of drives in a single vdev (a vdev being a group of physical disks). Using that many disks in one vdev creates issues, among them very long rebuild times. You do not want long rebuild times: if another drive fails during a rebuild, your data can be gone. There are other issues with such large vdevs too, issues you should definitely research before embarking on such a plan.

The standard recommendation would be to have multiple smaller Vdevs placed into a pool. Once multiple Vdevs are in a pool, the data will appear as though it is a single repository.

So with 12 drives total, the most common layout would be 2 vdevs of 6 drives each or 3 vdevs of 4 drives each, all in one pool. Then you'd have to decide on how much redundancy you'd like.

RAIDZ-1 sacrifices one drive's worth of space per vdev (for parity) and tolerates one drive failure in each vdev.
RAIDZ-2 sacrifices two drives' worth of space per vdev (for parity) and tolerates two drive failures in each vdev. Keep in mind this sacrifice is parity spread across the whole vdev, not a dedicated physical drive.

Since you only want to sacrifice two drives' worth of space (out of 12) to parity, that would suggest 2 vdevs of 6 drives each, each vdev being RAIDZ-1.

The upside is that you maximize storage volume; the downside is that losing two drives from any single vdev destroys the entire pool. Also consider that while you are resilvering (rebuilding) after a drive failure, a second drive failing in that same vdev during the resilver would likewise destroy the pool. With 12 drives, it's not a question of "if" a drive fails, but when.

Your final decision depends entirely on your use case. Do you need uptime at all costs? Do you have offsite backups? (RAID is redundancy, not backup, not even a little.) Are you just storing things you could reacquire if necessary (movies, music, and TV), or is it irreplaceable unique data? If it's the latter, an offsite backup strategy should be the highest priority.
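The space trade-offs above are easy to sanity-check with shell arithmetic. A quick sketch, assuming 2TB drives and ignoring ZFS metadata overhead:

```shell
# Usable space = (drives per vdev - parity drives) * number of vdevs * drive size (TB)

# 2 vdevs of 6 drives each, RAIDZ-1 (one parity drive's worth per vdev):
echo $(( (6 - 1) * 2 * 2 ))   # 20 TB usable, 4 TB given to parity

# 2 vdevs of 6 drives each, RAIDZ-2 (two parity drives' worth per vdev):
echo $(( (6 - 2) * 2 * 2 ))   # 16 TB usable, 8 TB given to parity

# 3 vdevs of 4 drives each, RAIDZ-1:
echo $(( (4 - 1) * 3 * 2 ))   # 18 TB usable, 6 TB given to parity
```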
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Actually, with 12 drives I would do ONE pool with two vdevs of 6 disks in raidz2; then you can lose 2 disks in either vdev and still not lose the pool. 6 disks per vdev is the optimal number for raidz2, and 5 disks per vdev is the optimal number for raidz1. But the chance of losing a second disk during a resilver is definitely higher, so if you value your time and data I'd go for z2, as well as have a backup, because "shit happens"....
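As a config-style sketch (not a tested command line, and the da0..da11 device names are hypothetical placeholders), that layout would be created in one shot so both vdevs land in a single pool:

```shell
# One pool ("tank") built from two 6-disk RAIDZ2 vdevs.
# Device names are examples only; substitute your actual disks.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Verify: both raidz2 vdevs should appear under the single pool
zpool status tank
```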
 

dtrobson099

Cadet
Joined
Mar 30, 2012
Messages
3
Actually guys - I'm only using 10 drives total. Guess I wasn't clear on that. Using 4 drives for parity sounds ideal, but then if all my drives are 2TB, I only have 12 TB of storage - which isn't really even close to enough. Since the bulk of my data is going to be replaceable, I'd probably opt for less redundancy.

That being said, I'm realizing my original idea won't work, since adding 20GB drives to the array would just limit each drive's useful space to 20GB, making my 2TB drives worthless. I'd really like to switch to FreeNAS, but it seems like with only striped RAID options, it's going to be a much larger up-front investment to get where I need to be for a long term solution. Or maybe I'm still not understanding its capabilities.

I'd like to take a step back, since my theory doesn't seem to be correct at this point. You guys understand what I want from my original post. Eventually I want 10 drives with 16+ TB of storage. This means at some point I'll have to swap in 4TB drives for 2TB drives, and so on.

Right now I have 6 2TB drives, with space in the case for 4 more. The thing I would like to avoid is having to store, for example, movies on separate volumes. I'd like all my movies to be in a single Movies folder, regardless of how large my collection gets. It's probably stupid that this is important to me, yet it is. Write speed is not a huge concern, but slow speeds can be annoying.

How would you guys suggest I proceed?
 

b1ghen

Contributor
Joined
Oct 19, 2011
Messages
113
If the data is not absolutely critical, I would go with a pool consisting of one 5-drive RAIDZ vdev initially (4 drives' worth of storage space). Then you can add another 5-drive RAIDZ vdev to your pool when the time comes; this will double the size of your pool (to 8 drives' worth of storage space) and keep all the data in the same place, so there's no need for separate volumes. Sure, you don't utilize the 6th drive at first, but it can be kept as a spare if one breaks (no use having it as a hot spare, since that feature isn't working right now, as far as I understand).
Keeping to 5-drive RAIDZ vdevs is also potentially optimal for performance, like protosd said.
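That grow-later plan maps onto two commands. A hedged sketch (hypothetical device names, config fragment rather than tested commands):

```shell
# Start with one 5-disk RAIDZ1 vdev: 4 disks' worth of usable space
zpool create tank raidz da0 da1 da2 da3 da4

# Later: add a second 5-disk RAIDZ1 vdev to the same pool.
# The pool (and every share on it) simply gets bigger; no new volume appears.
zpool add tank raidz da5 da6 da7 da8 da9
```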
 

dtrobson099

Cadet
Joined
Mar 30, 2012
Messages
3
Ok - I think I see where you're coming from. I'm sure all this will become more apparent to me when I actually start using this as well.

Let me trace through this: I would set up my array as RAIDZ1, using 5 drives initially... effectively giving me 8TB of space. This could get me through 4-6 months, but not much longer. At that point, I need to buy 4 more 2TB drives all in one shot, and add another vdev to the pool. This would effectively increase the size of the share to 16TB. In the future, though, upgrading size would mean I'd need to replace drives 5 at a time... so when 4TB drives come into the picture, I could replace all the drives in the first vdev and increase my storage to 24TB. Then later on, do the same again and increase to 32TB, etc. Am I understanding this correctly?
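The capacity math being traced here, as quick shell arithmetic (usable space = 4 data drives per 5-disk RAIDZ1 vdev, times drive size in TB):

```shell
echo $(( 4 * 2 ))           # one vdev of 2TB drives:       8 TB
echo $(( 4 * 2 + 4 * 2 ))   # second 2TB vdev added:       16 TB
echo $(( 4 * 4 + 4 * 2 ))   # first vdev moved to 4TB:     24 TB
echo $(( 4 * 4 + 4 * 4 ))   # both vdevs on 4TB drives:    32 TB
```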
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
In the future, though, upgrading size would mean I'd need to replace drives 5 at a time... so when 4TB drives come into the picture, I could replace all the drives in the first vdev and increase my storage to 24TB. Then later on, do the same again and increase to 32TB, etc. Am I understanding this correctly?

Mostly. You wouldn't have to replace 5 drives at one time, though you'd only realize the extra storage after all 5 drives in the Vdev had been replaced with larger drives.

For instance, if one 2TB drive in your 5 drive Vdev failed, you could replace it with a 4TB drive but you wouldn't see the extra storage, yet. Once the remaining 4 drives in the Vdev had been upgraded to 4TB (or larger) it would automatically (as I understand it) make available the extra space.
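That matches how ZFS gates expansion: a vdev can only grow to the size of its smallest member, and whether the new space shows up on its own depends on the pool's autoexpand property. A sketch of one replacement cycle (device names are hypothetical):

```shell
# Let new space appear automatically once a whole vdev has been upgraded
zpool set autoexpand=on tank

# Swap one 2TB disk (da2) for a 4TB disk (da10); ZFS resilvers onto it
zpool replace tank da2 da10

# If autoexpand was off, a replaced disk can be expanded manually later
zpool online -e tank da10
```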

To better understand ZFS basics, I cannot recommend the following videos enough.
https://blogs.oracle.com/video/entry/becoming_a_zfs_ninja

You can also download them and listen in the car, the visuals aren't nearly as important as the audio.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I would set up my array as RAIDZ1, using 5 drives initially... effectively giving me 8TB of space.

...At that point, I need to buy 4 more 2TB drives all in one shot, and add another vdev to the pool.

If you start with 5 drives in your pool, you need to add 5 more when you add the next batch, otherwise it will be unbalanced and open up other problems.

When it comes time to replace drives in your vdevs, do them one at a time in each vdev. I'm not sure whether a vdev expands automatically once all of its drives have been replaced, or waits until the other vdev has had each disk replaced before expanding. If you expand one before the other, you have 2 unequal vdevs again...

If there's any way you can add a 6th drive to your initial vdev and make it z2, I would do that.
 