zfs-pool striping across vdevs

Status
Not open for further replies.

berrywipe

Dabbler
Joined
Mar 11, 2012
Messages
19
Hi guys.
These days I began experimenting a little with FreeNAS and I'm willing to use it in my home system, but first I need to clear my mind about the best configuration for my purposes.
I read some stuff about ZFS and ran some tests... nice filesystem, even if in my configuration I can write at about 250 Mb/s versus 600-650 Mb/s with UFS (almost capping my Ethernet bandwidth).
But that's OK; I think the additional features are worth the lost bandwidth.

Now... straight to the point.
From what I understood, by adding more vdevs to the same pool you can extend the size of your zpool, which acts like a virtual device that you can share as if it were a single drive...
By doing so, the system creates a stripe across the vdevs.
What I can't grasp (I didn't find good details about it) is: how is this stripe built?

I mean: when you put 2 drives in RAID-0, the stripe is accomplished by putting one block on one disk and the next block on the other...
What about vdev stripes? Since you can mix vdevs of different sizes, the striping must be quite different from what is done in RAID-0.

And what about data integrity? If you lose one vdev, do you lose the whole pool (as in RAID-0) or just part of the data (as when you lose a drive in a JBOD setup)?

If you have some good readings about it, please link :)
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
You can think of ZFS striping roughly the same way you think about RAID0 striping, except that ZFS doesn't use fixed-width stripes: writes are spread across the top-level vdevs dynamically, roughly in proportion to the free space on each, which is how it copes with vdevs of different sizes.

Data integrity is the same as well. If you lose a vdev out of a striped pool, the whole pool fails. That is why you should place redundancy inside each vdev.

ZFS Administration Guide: http://docs.oracle.com/cd/E19082-01/817-2271/817-2271.pdf
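As a minimal sketch of that advice (the pool name "tank" and the device names ada0..ada5 are placeholders, not taken from this thread), a pool striped across two redundant vdevs would be created like this:

```shell
# Create a pool named "tank" striped across two raidz vdevs.
# ZFS spreads writes across both vdevs; each vdev can survive
# the loss of one of its member disks, so losing one disk does
# not take down the whole pool.
# Device names (ada0..ada5) are placeholders for your own disks.
zpool create tank \
    raidz ada0 ada1 ada2 \
    raidz ada3 ada4 ada5

# Verify the layout: both raidz vdevs appear as top-level vdevs.
zpool status tank
```

Note that a plain `zpool create tank ada0 ada1` (no raidz/mirror keyword) would stripe across bare disks with no redundancy at all, which is exactly the RAID0-style failure mode described above.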
 

berrywipe

Dabbler
Joined
Mar 11, 2012
Messages
19
Thank you! I guess data is striped in asymmetrical blocks across the different-sized vdevs... something like that!
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Reading through the documentation, I have a couple of concerns. I had an old Debian server running several disks. I slowly migrated all my stuff over to my new FreeNAS box, and after a week I dropped in the 5 remaining disks that had been in the old Debian server. So currently I am running 3 separate raidz vdevs, all in the same zpool. I have noticed that a few of the 5 disks I put in last have some errors on them, and I would like to pull all 5 of those disks out to get them replaced. It would appear that my only option is to back up my data somewhere else and blow away the pool altogether; is that correct?
I cannot remove one vdev from the pool without losing data?
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
You can't remove a vdev from a pool, that is correct. You could replace each of the disks, one at a time, without degrading the vdev. It would take a while, depending on how much data is on there, but it may be a more attractive option than migrating all the data off to start over.
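As a rough sketch of that one-at-a-time procedure (the pool name "tank" and the device names da5/da6 are placeholders, not taken from this thread), each swap looks like:

```shell
# Replace one disk of a raidz vdev in pool "tank".
# "tank", da5 (the failing disk) and da6 (its replacement)
# are placeholder names for your own pool and devices.

# Start the replacement; the vdev stays online while ZFS
# resilvers the data onto the new disk.
zpool replace tank da5 da6

# Watch progress; wait until the resilver is reported complete
# before touching the next disk, so the vdev never loses its
# redundancy mid-replacement.
zpool status tank
```

Repeat the replace-then-wait cycle for each remaining disk you want to swap out.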
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
You can't remove a vdev from a pool, that is correct. You could replace each of the disks, one at a time, without degrading the vdev. It would take a while, depending on how much data is on there, but it may be a more attractive option than migrating all the data off to start over.

Thank you, that does help. Although, with the errors I am seeing on a couple of these disks, I'd better hop on it so I don't lose data.
Currently I am only using 2.5 TB out of 12, and about 500 GB of it is backed up elsewhere anyway, as it's my more important data.

Sorry to threadjack; I just thought my issue was similar in nature. That answers my question.
 

berrywipe

Dabbler
Joined
Mar 11, 2012
Messages
19
No problem, that question was perfectly on topic and I wanted to ask it myself, just to be sure...
 