Expanding a pool with larger drives

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
hi guys

I have a pool called "tank" at the moment, created out of 3 x 4TB IronWolf drives.

I've also got a faulty onboard controller at the moment, with 2 ports that are not to be trusted.

And I've got 3 x 4TB IronWolf drives spare.

The first option is to add these 3 additional HDDs to the "tank" pool. I've "heard" mention of zvols, and I also see the little gear with the Expand Pool option.
How are these used? If they're the same size as the disks in the pool, then I understand they're fully utilised and the pool is expanded from 3 -> 5 HDDs. But do the new disks get created into their own raidz grouping, with the available space then added to the pool, or is the raidz vdev itself grown from 3 -> 5 HDDs?

If they're larger, let's say I want to add 2 x 8TB HDDs, is only the first 4TB of each utilised? Or are the 2 drives configured into a RAIDZ group of their own, with the available space then added to the pool?

G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Forget ZVOLs, that's not what you're talking about here.

You have a Pool of one VDEV which I suppose is RAIDZ1.

You can't change that VDEV as it's not currently possible to expand a RAIDZ VDEV by adding disks... maybe in a couple of years, but today, no can do.

What you can do is replace all of the disks in that VDEV with larger disks, which will result in a RAIDZ VDEV of the same number of disks, but with a larger capacity, or,

You can add an additional VDEV to the pool (which would then be striped together with the existing VDEV) which then increases the pool by the capacity of the new VDEV and new data will mostly be written to the new VDEV.
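
If you prefer the shell, the two options look roughly like this (a sketch only; device names like da1/da4 are placeholders, and on TrueNAS you would normally do both through the GUI rather than the command line):

# Option 1: replace the disks in place, one at a time, letting each resilver finish
zpool replace tank da1 da4      # da4 = the new, larger disk
zpool status tank               # wait for the resilver before the next swap

# Option 2: add a second 3-disk RAIDZ1 VDEV, striped with the first
zpool add tank raidz1 da4 da5 da6
zpool list -v tank              # shows both VDEVs and the new total capacity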

Because of the striping together of VDEVs, a pool is then reliant on all VDEVs to retain its integrity, so if you lose enough disks to break a VDEV, the whole pool is dead. This usually means that people want to have a pool of VDEVs of the same type.

If you add differently sized disks into the same VDEV, all disks take on the capacity of the smallest member disk in that VDEV so capacity of 8 + 8 + 4 is the same as for 4 + 4 + 4.
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
Hi sretalla

Thanks for the response; let me unpack some of it, please.

So when I remove a drive and then replace it with a larger disk, I might end up in a configuration that's 4 + 4 + 8.
So to really increase a current pool's size, I would then do this 3 times, removing a 4TB drive and replacing it with an 8TB each time, moving from 3 x 4TB -> 3 x 8TB.

When I just go Pool -> Expand, what does it do? As I don't have a spare disk in the machine at the moment, I'm assuming it's not asking me which drive to expand onto; guessing that's where it would then create a 2nd vdev with the additional disks and add that vdev to the pool.

Off topic: I assume a pool can't be renamed?

G

 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
From my perspective, it would have been nice to create a new vdev (say, for example, as raidz2) and then add this vdev to the current pool, thus expanding the pool with a self-contained, highly protected vdev.

Similar to large enterprise SANs, where a SAN volume is striped over multiple RAID5 or RAID6 groups.

To confirm/check... what does Storage / Pool / gear / Expand Pool do?

G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
From my perspective, it would have been nice to create a new vdev (say, for example, as raidz2) and then add this vdev to the current pool, thus expanding the pool with a self-contained, highly protected vdev.
A pool doesn't work that way. VDEVs aren't self-contained, they are integral to the pool. The redundancy level is on the VDEV if that's what you meant, but having 2 VDEVs of different redundancy in the same pool is like locking your front door and leaving all the windows wide open.

You can indeed use the Add VDEV option to add a VDEV if you wanted to do that.

what does Storage / Pool / gear / Expand Pool do?
Nothing useful in most scenarios. It's a manual trigger for something that will happen automatically at pool creation or after replacing the last disk in a size increase activity (expanding the pool capacity to fill all free disk space). If you run it, you will see no change to anything.
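
(As far as I understand it, what that button maps to at the shell is roughly the following; a sketch with a hypothetical device name:)

zpool set autoexpand=on tank    # grow into new free space automatically from now on
zpool online -e tank da1        # -e expands one device to use all of its available space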

Similar to large enterprise SANs, where a SAN volume is striped over multiple RAID5 or RAID6 groups.
Not sure what you're meaning to say here other than RAIDZ2 is a good thing. (which the forum agrees with... if you want a good balance between redundancy and capacity together with throughput, but not IOPS)
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
Not sure what you're meaning to say here other than RAIDZ2 is a good thing. (which the forum agrees with... if you want a good balance between redundancy and capacity together with throughput, but not IOPS)
What I was saying is that it would have been nice if a pool (similar to a meta volume on a SAN) were based on the sum of 1 or more vdevs (similar to RAID groups), like an enterprise SAN where a LUN (similar to our dataset) is then created on the meta volume, and where you can increase the meta volume size by adding more RAID groups.
You're going to have a similar setup with TrueNAS SCALE, which is based on a hyperconverged architecture where the volume size can be expanded by adding more nodes, widening the stripe.

Maybe what I'm saying is: with TrueNAS being used by a lot of home users... where we don't have the option on day 1 to simply buy all the same-size drives required for the next 5 years of growth, a weak point I'm seeing at the moment is the ability (and current complexity) of expanding a pool, and thus the space available to a hosted dataset.

Will go dig a bit more through the docs.
G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What I was saying is that it would have been nice if a pool (similar to a meta volume on a SAN) were based on the sum of 1 or more vdevs (similar to RAID groups), like an enterprise SAN where a LUN (similar to our dataset) is then created on the meta volume, and where you can increase the meta volume size by adding more RAID groups.
That's exactly how it does work.

Maybe what I'm saying is: with TrueNAS being used by a lot of home users... where we don't have the option on day 1 to simply buy all the same-size drives required for the next 5 years of growth, a weak point I'm seeing at the moment is the ability (and current complexity) of expanding a pool, and thus the space available to a hosted dataset.
Using mirrored pairs as VDEVs allows you to expand 2 disks at a time.
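
A minimal sketch of that, with hypothetical disk names (each mirrored pair is its own VDEV, so the pool grows 2 disks at a time):

zpool add tank mirror da6 da7   # stripes a new 2-disk mirror VDEV into the pool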

You're going to have a similar setup with TrueNAS SCALE, which is based on a hyperconverged architecture where the volume size can be expanded by adding more nodes, widening the stripe.
I don't think you understood that right. You will be able to replicate storage between nodes, but that won't increase the total available space. The scaling out is more about compute.
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
OK... so maybe it's just miscommunication here then... :)

My tank pool is made up of one vdev at the moment (raidz), based on 3 x 4TB.
Am I then right in saying you are implying I should be able to add another vdev to tank, also based on 3 x 4TB, to the same pool, or even 3 x 8TB?
G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Am I then right in saying you are implying I should be able to add another vdev to tank, also based on 3 x 4TB, to the same pool, or even 3 x 8TB?
Yes.
Using the Add VDEV option from the cogwheel.
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
You previously hinted that in a scenario where I have 3 x 4TB drives, I could remove one and replace it with an 8TB drive. Would the complete 8TB be used (implying I can do this 3 times to migrate from 3 x 4TB to 3 x 8TB), or only 4TB?
G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You previously hinted that in a scenario where I have 3 x 4TB drives, I could remove one and replace it with an 8TB drive. Would the complete 8TB be used (implying I can do this 3 times to migrate from 3 x 4TB to 3 x 8TB), or only 4TB?
Until you replace all of the disks (in that VDEV) with 8TB, they will continue to expose only 4TB each. When the last rebuild completes, they will expose 8TB each.
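
One way to watch that happen from the shell (the EXPANDSZ column shows space a VDEV could still grow into):

zpool status tank               # confirm each resilver has completed
zpool list -v tank              # per-VDEV SIZE / EXPANDSZ / FREE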
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
Sweet, and I guess that's when that expand feature kicks in.

G
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
... with the swap-out process above I can understand that we grow the pool and data is striped evenly across all disks.

With the adding of a vdev to a current pool, I assume it works more like this: the pool is extended, and once the first vdev is full then data will go onto the 2nd; aka there is no "oh, I see a new vdev, let me rebalance the data over the now 2 vdevs".

G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
With the adding of a vdev to a current pool, I assume it works more like this: the pool is extended, and once the first vdev is full then data will go onto the 2nd; aka there is no "oh, I see a new vdev, let me rebalance the data over the now 2 vdevs".
It's not clear cut, but as a general statement, ZFS seeks out contiguous blocks to write transaction groups to, and more of those will be found on the empty VDEV than on the partially filled one, so more writes are likely to land there. There is no automatic rebalancing at all, but you can use a few tricks to make it happen if that's something you really want (perhaps a bigger mess that way, but it might give better IOPS with a more balanced pool).

Of course you can always back up, destroy the pool, create a new empty pool, and restore the data to get a perfectly balanced pool with a new number of VDEVs.
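
One commonly cited version of those tricks is to rewrite the data in place via replication; a sketch only, assuming a dataset called tank/data and enough free space for a second copy:

zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs recv tank/data_new   # the copy is written across all current VDEVs
# after verifying the copy is good:
zfs destroy -r tank/data
zfs rename tank/data_new tank/data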
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
... :) Thank you for entertaining my questions; your time is appreciated.

Got a better understanding of things now. I have a lot more questions, and "wishes", but all good for now, for now... ;)

G
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
... stupid question really... my HDDs are 4TB IronWolfs... but when they are configured in TrueNAS they only show as 3.64TB; that's an expensive 400GB... missing.
Why/where is my 400GB... and is it always a fixed percentage of the disk?
G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
my HDDs are 4TB IronWolfs... but when they are configured in TrueNAS they only show as 3.64TB
You're mixing terminology...

4TiB and 4TB aren't the same thing.
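
To put numbers on it: drive vendors sell decimal terabytes, while the UI reports binary tebibytes.

4 TB  = 4 x 10^12 bytes = 4,000,000,000,000 bytes
1 TiB = 2^40 bytes      = 1,099,511,627,776 bytes
4 x 10^12 / 2^40        ≈ 3.64 TiB

So nothing is missing: your 4TB drive and the 3.64 shown are the same number of bytes, and yes, the gap is a fixed ratio (about 9% at the tera scale).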

 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
Where is this online storage calculator?


Missing storage
www.truenas.com
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
Any chance you have a good reference document that covers ZFS and how it works on TrueNAS?

G
 