Help setting up FreeNAS 8 with 10x2TB drives.

Status: Not open for further replies.

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
I set up FreeNAS 8.0.1 Beta 4 with 10x2TB WD Green drives. I've already set up a single zpool with 5 drives in RAID-Z1. I'm still not sure what to do with the remaining 5 drives. There are 2 options:

  1. Create a single zpool striped across a PAIR of 5-drive RAID-Z1 arrays (vdevs). I read an article saying this gives me the flexibility to upgrade 5 drives at a time. For example, I might want to upgrade only the 5 drives in one vdev to 3TB drives. Can I really do that?
  2. Create two zpools, each a RAID-Z1 of 5x2TB drives. I'm sure with this config I can easily upgrade just 5 drives at a time. :)
So which is the better configuration, and what are the pros and cons of each? Thanks. (Rough zpool equivalents of both layouts are sketched below.)
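For reference, the two layouts correspond roughly to the following zpool commands (pool names and device names like da0..da9 are placeholders here, and in FreeNAS 8 you would normally do this through the GUI volume manager rather than the shell):

    # Option 1: add a second RAID-Z1 vdev to the existing pool, striping across both
    zpool add tank raidz1 da5 da6 da7 da8 da9

    # Option 2: keep the existing pool and create a second, independent RAID-Z1 pool
    zpool create tank2 raidz1 da5 da6 da7 da8 da9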
 

kashiwagi

Dabbler
Joined
Jul 5, 2011
Messages
35
Hi ctantra,

I suppose your zpool setup would depend on what you are trying to achieve. The big difference I see between the two options you present is that option 1 will give you better performance (striping over the two vdevs), while option 2 is more resilient to data loss: if two disks in the same vdev die, you only lose that one zpool when the pools are separate, whereas if both vdevs are in the same pool, your entire pool is gone.

Why not go for RAID-Z2 over 10 disks instead of two 5x2TB RAID-Z1 vdevs? That setup lets 2 drives fail at the same time without downtime, while 2 simultaneous failures in the same vdev with either of your options would mean catastrophic data loss.
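Something along these lines (device names are placeholders, and the FreeNAS GUI can build the same layout):

    # One RAID-Z2 vdev across all 10 disks; any two disks can fail without data loss
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9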

I think you might get better suggestions if you explain what you want to optimize for (e.g. read speed, a single large volume, redundancy, etc.).
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
Hi ctantra,
Why not go for RAID-Z2 over 10 disks instead of two 5x2TB RAID-Z1 vdevs? That setup lets 2 drives fail at the same time without downtime, while 2 simultaneous failures in the same vdev with either of your options would mean catastrophic data loss.

I think you might get better suggestions if you explain what you want to optimize for (e.g. read speed, a single large volume, redundancy, etc.).

I'm avoiding RAID-Z2 over 10 drives because I have already created one zpool over 5 drives and filled it with data. :)

Yes, I know that option 1 has better performance than option 2. But is it true that it gives me the flexibility to upgrade 5 drives at a time? For example, could I upgrade only the 5 drives in one vdev to 3TB drives?
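From what I've read, the upgrade would work roughly like this, one disk at a time within the vdev (device names are just examples, and I'm not sure how much of this the FreeNAS 8 GUI exposes):

    # Swap each 2TB disk in the vdev for a 3TB one, waiting for the resilver each time
    zpool replace tank da0 da10
    # ... repeat for the remaining four disks in that vdev ...
    # The extra space only shows up after the last replacement; depending on the ZFS
    # version it may also need "zpool set autoexpand=on tank", "zpool online -e", or
    # an export/import of the pool.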
 
Joined
May 27, 2011
Messages
566
I'd go for the 10-disk RAID-Z2. Only one disk of redundancy per vdev across 10 disks is insanely risky. The likelihood of 2 drives failing simultaneously is minute, but the likelihood that 2 disks fail within the time it takes to get a replacement for the first one is not.

Also, the version of ZFS that FreeNAS currently uses does not support expansion of vdevs.
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
It seems I'll have to work hard again: backing up 6TB of data, then creating a single zpool as a 10-drive RAID-Z2 array. Thanks for your recommendations, guys...
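The plan is roughly this shape (all names are placeholders, and it assumes somewhere to hold a full copy of the 6TB in the meantime):

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F backup/tank      # copy everything off
    zpool destroy tank                                         # this wipes the old 5-disk pool
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
    zfs send -R backup/tank@migrate | zfs receive -F tank      # copy everything back

rsync to another machine would work just as well if there's no second pool to send to.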
 
Joined
May 27, 2011
Messages
566
It seems I'll have to work hard again: backing up 6TB of data, then creating a single zpool as a 10-drive RAID-Z2 array. Thanks for your recommendations, guys...

Good call. There's no fear like having a disk die and knowing you've got no safety net left while you wait for the RMA to be approved.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
My vote...

5 disks in RAIDZ1
5 disks in RAIDZ1
Stripe them

You get the performance boost of striping across the two arrays AND the redundancy of RAID-5.

If space isn't an issue but redundancy/integrity is, perhaps consider two RAID-Z2 arrays; the write performance penalty of RAID-6 should be offset by having the two arrays striped.

That being said, 10 disks in RAID-Z2 would be the next best option, since I haven't figured out how to do nested ZFS RAID... :)
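For rough raw numbers with 2TB disks (before ZFS overhead and TB-vs-TiB conversion):

    2 x 5-disk RAIDZ1:  2 x (5-1) x 2TB = 16TB usable, survives 1 failure per vdev
    1 x 10-disk RAIDZ2: (10-2) x 2TB     = 16TB usable, survives any 2 failures
    2 x 5-disk RAIDZ2:  2 x (5-2) x 2TB = 12TB usable, survives 2 failures per vdev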
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
After a few days of consideration, I finally agree with you, jenksdrummer. Hahaha... A single zpool striped across a pair of 5x2TB RAID-Z1 vdevs.
 