Hard Drive Failure And ZFS

Status
Not open for further replies.

fungus1487

Dabbler
Joined
Jan 12, 2012
Messages
42
Hi guys,

I am weighing up some scenarios for my setup: how performance will be affected and the chance of complete data loss. The setup is...

20x 2TB Hard Disks
2x Rackmount Atom Servers (each can hold 10 HDDs)

1. Set up two 10-disk RAID-Z1 vdevs, one per machine, giving me 36TB usable
2. Set up four 5-disk RAID-Z1 vdevs, two per machine, giving me 32TB usable
3. Set up two 10-disk RAID-Z2 vdevs, one per machine, giving me 32TB usable
4. Set up four 5-disk RAID-Z2 vdevs, two per machine, giving me 24TB usable

I will be using one machine as a dedicated replicated backup device, so I instantly halve my usable space. I'm also not a fan of #4, seeing how much hard drive real estate I am losing, but is there any reason why I should choose one over the other? What is performance like on a 10-disk RAID-Z1/Z2 vdev? Is there any real need to go with #2, #3 or #4 if I am replicating across devices anyway? And if anybody fancies crunching the numbers, what is the potential for actual complete data loss?
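
For reference, here is roughly what I had in mind for options 1 and 3 on a single box (the pool name "tank" and the da0-da9 device names are just placeholders, the real boxes will use whatever FreeNAS assigns):

# Option 1: one 10-disk RAID-Z1 vdev (9 data + 1 parity)
zpool create tank raidz da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

# Option 3: one 10-disk RAID-Z2 vdev (8 data + 2 parity)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9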
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
Best practice says you should have a power of 2 for the number of data drives in your vdev. In this case that means 9 spindles for RAIDZ (8 data + 1 parity) or 10 spindles for RAIDZ2 (8 data + 2 parity).

I think you would get higher performance if you were to set up 4x 5-spindle RAIDZ vdevs (still a power of 2 for data drives: 4 data + 1 parity each), because you'll be striping across more vdevs in the pool.
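
On one of your boxes that would look something like this (the pool name and the da0-da9 device names are just for illustration):

# one pool made of two 5-spindle RAIDZ vdevs; writes are striped across both vdevs
zpool create tank \
    raidz da0 da1 da2 da3 da4 \
    raidz da5 da6 da7 da8 da9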
 

fungus1487

Dabbler
Joined
Jan 12, 2012
Messages
42
Thanks for the reply, I wasn't aware of the power-of-2 rule for data disks. A quick google turns up more info on this, so thank you.

I have another question though: would combining these four vdevs into two extended pools (two vdevs each) affect performance?

E.g.

I create two pools called "live" and "backup", each starting with a single 5-drive RAIDZ vdev. I then take the remaining 5 drives on each machine and add them as a second RAIDZ vdev to "live" and "backup" to extend the storage capacity of each pool (effectively doubling its original size). Will this cause "live" or "backup" to perform any worse than creating a 10-disk vdev to begin with?
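
In zpool terms I'm picturing something like this on the "live" box (da0-da9 are placeholder device names):

# start the pool with one 5-drive RAIDZ vdev
zpool create live raidz da0 da1 da2 da3 da4

# later, extend it with a second 5-drive RAIDZ vdev
zpool add live raidz da5 da6 da7 da8 da9

# the pool now stripes new writes across both vdevs
zpool status live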

Also, is this more difficult to manage or transition to another system if I ever upgrade? Is it just going to be as easy as removing the drives, reattaching them and then creating the vdevs in the same order?

Thanks.
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
I would expect that a larger number of smaller vdevs in a pool would result in higher performance.

If you migrate to another system, as long as you move all the drives over, ZFS will figure everything out. FreeNAS can automagically import the volume. Shouldn't be a problem.
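
Roughly speaking (the pool name here is just an example):

# on the old box, before pulling the drives
zpool export live

# on the new box, after attaching the drives
zpool import        # lists any pools found on the attached disks
zpool import live   # imports the pool; the vdev layout is read from the disks themselves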
 