How Does TN Fill Drives?

Zain

Contributor
Joined
Mar 18, 2021
Messages
124
I am in the process of upgrading drives to larger disks in an R720. Once they are all swapped, I will install the old drives in an MD1200 so that when the additional storage is needed again I can just boot that up as well.

My question here is: how does SCALE fill up the drives? I added drives as storage was needed, so as you can see below, the data is not balanced across each drive. Yes, I know, I shouldn't worry about the data being balanced as it will automatically handle itself with time - I'm not worried about that. I am more just curious how the data is written to each of the VDEVs when the data isn't balanced and there are different-size drives in the pool.

[Attached image: 1636981300827.png — screenshot of pool/vdev capacities]


Is data written to drives based on the percentage of the capacity of each drive/vdev? Or is the data written so that an equal number of bytes is written to each vdev?

Thanks.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I shouldn't worry about the data being balanced as it will automatically handle itself with time
Not necessarily... maybe it will if you write enough and wait long enough.

You can use something like this to help it along though...

Or just copying the data off and back on again manually will work too
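The "copy off and back on" idea can be sketched as a script that rewrites every file in place: each rewrite forces ZFS to allocate fresh blocks, which land across all vdevs currently in the pool (including newly added ones). This is a minimal illustrative sketch, not a vetted rebalance tool — the dataset path is a placeholder, and for safety the demo below operates on a temporary directory. Note that rewriting files temporarily doubles the space used per file and will inflate snapshot usage.

```python
# Minimal sketch of a manual "copy off and back on" rebalance.
# DATASET is a hypothetical placeholder -- point it at a real dataset
# (e.g. /mnt/tank/mydata) only after testing on disposable data.
import os
import shutil
import tempfile

def rewrite_in_place(root):
    """Copy each regular file to a temp name, then rename it back.
    On ZFS the rewritten blocks are freshly allocated across all vdevs."""
    rewritten = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            tmp = src + ".rebal.tmp"
            shutil.copy2(src, tmp)   # copy data + metadata
            os.replace(tmp, src)     # atomically swap the copy back in
            rewritten.append(src)
    return rewritten

# Demo on a throwaway directory so the sketch is safely runnable.
DATASET = tempfile.mkdtemp()
with open(os.path.join(DATASET, "example.txt"), "w") as f:
    f.write("demo data")

files = rewrite_in_place(DATASET)
print(f"rewrote {len(files)} file(s) under {DATASET}")
```

Running a script like this against a real dataset rewrites everything once, which achieves roughly the same effect as copying the data off and restoring it, without needing a second pool to stage it on.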

Is data written to drives based on the percentage of the capacity of each drive/vdev?
No. ZFS looks to put transactions together in transaction groups that are contiguous on the disk(s), so depending on the size of the requested writes and the unfragmented free space available to hold them, many different outcomes can occur. You shouldn't spend too much time trying to predict it unless you want to become super-familiar with the deep code (it's on GitHub if you want to do that).
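One property worth knowing is that OpenZFS biases new allocations toward vdevs with more free space, so an emptier vdev tends to absorb more of each write until the pool evens out. The toy model below is NOT actual ZFS code — the vdev names and sizes are made up, and "pick the vdev with the most free space" is only a crude stand-in for the real metaslab selection logic.

```python
# Toy model (not real ZFS code) of free-space-biased allocation.
# Assumption: each write simply goes to the vdev with the most free
# space, a rough stand-in for ZFS's free-space-weighted allocator.

def allocate(vdevs, write_size):
    """Send one write to the vdev with the most free space."""
    target = max(vdevs, key=lambda v: v["free"])
    target["free"] -= write_size
    return target["name"]

# Hypothetical pool: one mostly empty vdev, one mostly full vdev.
vdevs = [
    {"name": "mirror-0", "free": 2000},  # newer, mostly empty
    {"name": "mirror-1", "free": 500},   # older, mostly full
]

counts = {"mirror-0": 0, "mirror-1": 0}
for _ in range(100):
    counts[allocate(vdevs, 20)] += 1

print(counts)  # most writes land on the emptier vdev until free space converges
```

In this model the new vdev soaks up the bulk of incoming writes until its free space matches the old one, after which writes alternate — which is why an unbalanced pool slowly evens out under sustained write traffic but never gets rebalanced retroactively.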

is the data written so that an equal number of bytes is written to each vdev?
All things being equal (having started the pool with all VDEVs present), this is the most likely outcome and also the most desirable for IOPS.
 

Zain

Contributor
Joined
Mar 18, 2021
Messages
124

Again, I'm not concerned about balancing the data; I was just curious how the drives get filled up. I would expect drives to be filled so that an equal number of bytes is spread across each drive/vdev in the pool, which as you mentioned would also benefit performance, but I wasn't sure.
 