RAID-Z with 9 HDDs

Status
Not open for further replies.

morskipas

Cadet
Joined
May 27, 2011
Messages
8
Hi everybody,

I have a little problem at the moment.
I have 4 x 1TB HDDs and 5 x 2TB HDDs, and I want to create a RAID-Z1. All of the storage should be available under one logical volume or dataset.
If I create a RAID-Z1 and select all the HDDs, the volume is only 3TB :confused:
If I create a RAID-Z1 volume with the 5 x 2TB HDDs, the raid is 8TB, which is OK.
If I then create a 2nd volume with the same name, using the 4 x 1TB HDDs, the volume stays at 8TB; I think it should be 11TB.

How can I configure the 9 HDDs (RAID-Z1) so that I have just one big volume to share as a Windows share?

Any ideas?

Thx
morskipas
 

jafin

Explorer
Joined
May 30, 2011
Messages
51
I don't think you can currently do that via the GUI. From the console you should be able to add a vdev to an existing pool. So, for example, you could create a pool with the 5 x 2TB drives, giving you 8TB usable.

Then, from the console, add another vdev to the pool using the 4 x 1TB drives.

Code:
# zpool add -f ${poolname} raidz ${disks}
e.g.
# zpool add -f tank raidz ad8 ad9 ad10 ad11


WARNING: When I tried this, the GUI picked up the new size correctly, but some things didn't work, e.g. viewing the list of disks in the pool showed only the original disks... perhaps you can hack away at the SQLite DB, but here be dragons!
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
This is certainly possible from the GUI, as I've got a single zpool (volume) consisting of two vdevs, the first with 4x2TB/RAIDZ1 and the second with 4x500GB/RAIDZ1. Data is striped across both vdevs.

To create this, go to "Create Volume", name the volume (e.g. tank), select your 5x2TB disks, specify RAIDZ1, then hit the "Create volume" button.

Now, click on "Create Volume" a second time, use the same name as before (i.e. tank - this is important in order to "stack" vdevs), select your remaining 4x1TB disks, specify RAIDZ1, then create the volume.

You should now have a single volume/zpool called "tank", consisting of two vdevs (5x2TB/RAIDZ1 and 4x1TB/RAIDZ1), giving a usable space of 11TB.

The advantage of two vdevs in a single zpool is that it gives you roughly double the IOPS compared with a zpool containing just a single vdev; three vdevs would give you triple the IOPS, and so on.
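As a sanity check on those capacity figures: RAIDZ1 spends one disk per vdev on parity, so each vdev contributes (number of disks - 1) x disk size of usable space. A quick sketch of the arithmetic:

```shell
# RAIDZ1 usable space per vdev: (disks - 1) * disk size.
vdev1=$(( (5 - 1) * 2 ))                # 5 x 2TB RAIDZ1 -> 8 TB
vdev2=$(( (4 - 1) * 1 ))                # 4 x 1TB RAIDZ1 -> 3 TB
echo "total: $(( vdev1 + vdev2 )) TB"   # total: 11 TB
```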
 

esamett

Patron
Joined
May 28, 2011
Messages
345
An alternative is to offload your data, delete your volumes, and start over with one big RAIDZ1 array. You would have 8TB of storage (every disk is truncated to the smallest member, 1TB) until you upgraded (replaced) the 1TB drives with 2TB drives, after which you would have 16TB. The advantage of this approach is that it trades some capacity now for easier expansion later.
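For reference, that drive-by-drive upgrade is done with `zpool replace`. A hedged sketch only - the device names (ada0, ada9) are placeholders, and the `autoexpand` property may not exist on older pool versions, where an export/import or `zpool online -e` is needed instead:

```shell
# Sketch only: swap each 1TB member for a 2TB disk, one at a time.
zpool set autoexpand=on tank   # let the pool grow automatically (newer ZFS)
zpool replace tank ada0 ada9   # old 1TB disk -> new 2TB disk
zpool status tank              # wait for the resilver to complete
# Repeat for each remaining 1TB drive; the extra capacity only
# becomes available once the last small drive has been replaced.
```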
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
A single RAIDZ1 vdev with 9 drives is a significant risk. Five or six drives would be my limit for RAIDZ1, more than that and I would use RAIDZ2 or split the zpool into multiple, smaller, RAIDZ1 vdevs with the side benefit of improved performance.
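To put numbers on that trade-off for a single 9-wide vdev (every member truncated to the smallest disk, here 1TB):

```shell
# Usable space for one 9-disk vdev with 1TB effective per disk.
echo "RAIDZ1: $(( (9 - 1) * 1 )) TB"   # 8 TB, survives one disk failure
echo "RAIDZ2: $(( (9 - 2) * 1 )) TB"   # 7 TB, survives two disk failures
```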
 

morskipas

Cadet
Joined
May 27, 2011
Messages
8
Hello Milhouse,

thx for your support.
I tried it exactly as you described, but without success :-(
As I wrote in my first post, I created one volume "nas" with my 5 x 2TB drives and got 8TB, that's OK.
Then I created one more volume - with the same name "nas" - with my 4 x 1TB drives, but "nas" was still only 8TB :-( no extra space.
Do you think the problem is only the GUI?

regards
morskipas
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Having created the volume with the two vdevs, what is the output of the CLI command "zpool status"?

For instance, in my case it's the following for my zpool/volume "share":

Code:
freenas# zpool status
  pool: share
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        share                                           ONLINE       0     0     0
          raidz1                                        ONLINE       0     0     0
            gptid/9919eae2-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/9984a486-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/99efa051-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/9a5fdf5b-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
          raidz1                                        ONLINE       0     0     0
            gptid/b7b496ae-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/b8bc1851-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/b9ba498d-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0
            gptid/babd1879-92b2-11e0-aa4b-001b2188359c  ONLINE       0     0     0

errors: No known data errors
freenas#


The above volume was created in 8.0.1-BETA2 with 4K sectors enforced on both vdevs. I've also successfully created the same zpool in 8.0.1-BETA1 and 8.0-RELEASE (the latter two without 4K sectors).
 