Replacing a Disk - Why does it not increase the Volume Size

Status
Not open for further replies.

wouterpelser

Cadet
Joined
Jul 10, 2012
Messages
8
Hi,

I'm testing various scenarios in a VM and trying to understand the ins and outs of ZFS.
I created a bunch of 8 GB and 20 GB virtual hard drives.
I understand that ZFS is limited to only using the same number of drives that I used to create my original volume (3× 8 GB drives).
I understand that if I want to increase the size of my volume I need to add 3 additional drives? .... Will adding 3 additional drives guarantee me redundancy?
So... here is something weird that I can't find an answer for: if I replace a drive (simulating a crash) with a larger 20 GB drive, why does the size of the pool never increase?

Thanks for all the help. I think FreeNAS is awesome.
 

sska

Cadet
Joined
Jul 13, 2012
Messages
4
What kind of pool is it, a stripe or a raidz1?
If you want to expand a raidz1 volume, you have two options:
1. replace each drive sequentially with a larger disk (the pool only grows once every disk in the vdev has been replaced, and only if the pool's autoexpand property is on)
2. add vdev(s) to the existing pool
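A sketch of option 1, with hypothetical device names (the pool expands only after all three members are replaced, and each replacement must finish resilvering before the next one starts):

```shell
# Allow the pool to grow automatically once all vdev members are larger
zpool set autoexpand=on tank

# Replace each 8 GB member with a 20 GB disk; wait for each
# resilver to finish (check `zpool status`) before the next replace
zpool replace tank sdb sde
zpool replace tank sdc sdf
zpool replace tank sdd sdg

# If autoexpand was off during the replacements, grow in place:
zpool online -e tank sde sdf sdg
```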

will adding 3 additional drives guarantee me redundancy
NO! If you have a raidz1 pool and add bare disk(s), you lose redundancy; you must add whole vdev(s).

For example, first create the pool:
Code:
# zpool create tank raidz1 sdb sdc sdd
# zpool status

  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

Now I want to expand the pool by adding 3 bare disks (zpool warns about the mismatched replication level here and needs -f to force it):
Code:
# zpool add -f tank sde sdf sdg
# zpool status
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          sde       ONLINE       0     0     0
          sdf       ONLINE       0     0     0
          sdg       ONLINE       0     0     0

errors: No known data errors


NEVER do this! Those three disks are plain single-disk stripes: losing any one of them destroys the whole pool, and once added they cannot simply be removed again.

The right way is to add a whole raidz1 vdev:
Code:
# zpool add tank raidz sde sdf sdg
# zpool status
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors
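As a rough sanity check on what each layout yields: raidz1 usable space is about (n − 1) × smallest-disk per vdev, ignoring metadata overhead. A back-of-the-envelope sketch (not zpool output) using the 8 GB test disks from the original post:

```shell
# raidz1 usable space per vdev ≈ (disks - 1) * smallest disk
disks=3
size_gb=8
vdev_gb=$(( (disks - 1) * size_gb ))
echo "one raidz1 vdev:  ${vdev_gb} GB"         # 16 GB usable
echo "two raidz1 vdevs: $(( 2 * vdev_gb )) GB" # 32 GB usable
```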
 

wouterpelser

Cadet
Joined
Jul 10, 2012
Messages
8
OK, thanks for that explanation.
With the last example you've increased the size of the volume and maintained redundancy?
It looks like it always needs to grow in groups of 3?

How about if I'm setting up a ZFS mirror with 2× 500 GB drives and then want to increase the capacity to 2× 2 TB drives?
Should I check the 4K sector checkbox, and how does it help me ..... "bitrot"?

Thanks for the help.
 

sska

Cadet
Joined
Jul 13, 2012
Messages
4
With the last example you've increased the size of the volume and maintained redundancy?
Yes, it has been expanded by adding a second raidz1 vdev, not single HDDs!

It looks like it always needs to grow in groups of 3?
Not quite: 3 is the minimum for a raidz1 vdev, so any group of 3 or more is fine.

How about if I'm setting up a ZFS mirror with 2× 500 GB drives and then want to increase the capacity to 2× 2 TB drives?
Should I check the 4K sector checkbox, and how does it help me ..... "bitrot"?
Thanks for the help.
Sorry, I don't have any experience with ZFS mirrors, nor with 4K-sector drives.
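For what it's worth, a two-disk mirror can be grown the same way as replacing raidz1 members: swap each disk for a larger one, let it resilver, then expand. A sketch assuming hypothetical FreeBSD device names ada0–ada3 (not tested here):

```shell
# Grow a mirror of two 500 GB disks to two 2 TB disks, one at a time
zpool set autoexpand=on tank
zpool replace tank ada0 ada2   # first 2 TB disk; wait for resilver
zpool replace tank ada1 ada3   # second 2 TB disk; wait again
zpool online -e tank ada2 ada3 # only needed if autoexpand was off
```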
 