Data distribution on vdevs in a ZFS pool

Status
Not open for further replies.

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Hi,

I've added several vdevs to our existing ZFS storage over the years w/o issue.

I've always assumed that ZFS will favor writing to the newly added vdevs until data is evenly distributed.

I've also assumed write performance will suffer until data is evenly distributed.

However, is it possible to see the current state of data distribution across my vdevs so that I can predict performance?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's certainly possible, at least in a vague way, but I don't know the commands.
 

droeders

Contributor
Joined
Mar 21, 2016
Messages
179
Hi,

I've added several vdevs to our existing ZFS storage over the years w/o issue.

I've always assumed that ZFS will favor writing to the newly added vdevs until data is evenly distributed.

I've also assumed write performance will suffer until data is evenly distributed.

However, is it possible to see the current state of data distribution across my vdevs so that I can predict performance?

I don't have access to a machine with ZFS on it right now, but I think the command you want is:

Code:
zpool iostat -v


I believe this will show usage of each of the vdevs within all your pools.
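To show how the capacity columns can be read, here is a minimal sketch that computes each line's fill percentage from `zpool iostat -v`-style alloc/free figures. The pool and vdev names and the numbers below are hypothetical sample data, not output from a real system, and the awk one-liner assumes alloc and free share the same unit suffix:

```shell
# Hypothetical capacity columns (name, alloc, free) as they appear in
# the first section of `zpool iostat -v` output for a two-vdev pool.
sample='tank        5.5T  2.5T
  raidz1-0    5.0T  1.0T
  raidz1-1    0.5T  1.5T'

# Fill percentage per line: alloc / (alloc + free).
# "$2 + 0" takes the numeric prefix, assuming matching units (T here).
echo "$sample" | awk '{
  a = $2 + 0; f = $3 + 0
  printf "%-12s %5.1f%% full\n", $1, 100 * a / (a + f)
}'
```

A large gap between the per-vdev percentages (here 83.3% vs. 25.0%) is the imbalance that skews write performance toward the emptier vdev.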
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I don't have access to a machine with ZFS on it right now, but I think the command you want is:

Code:
zpool iostat -v


I believe this will show usage of each of the vdevs within all your pools.
Don't have any pools with multiple vdevs on hand, but the output I get suggests that it would work.
 

vibratingKWAX

Dabbler
Joined
Oct 28, 2016
Messages
12
@aufalien I would love to see the output of the zpool list -v command on your system!

I don't think it's worth opening another thread for this question, which is just out of interest:
Say you start with a 4-drive RAIDZ1, fill it up over the years, and later add another 4-drive RAIDZ1 to the pool.
Is it possible to "force" ZFS to reallocate data from the old vdev (nearly full) to the newly added one (completely empty) to gain the speed benefit of two striped vdevs with evenly distributed data?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Is it possible to "force" ZFS to reallocate data from the old vdev (nearly full) to the newly added one (completely empty) to gain the speed benefit of two striped vdevs with evenly distributed data?
No.

The closest thing you can do is zfs send | zfs recv from one dataset to another.
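As a sketch of that workaround, under hypothetical pool and dataset names (tank, tank/data), the idea is to copy the dataset within the pool so the received blocks are freshly allocated across all vdevs:

Code:
# Snapshot the source so the send stream is consistent.
zfs snapshot tank/data@rebalance

# Copy the dataset; the newly written blocks are allocated across
# all vdevs, with the allocator favoring the emptier one.
zfs send tank/data@rebalance | zfs recv tank/data-rebalanced

# Only after verifying the copy, retire the original and rename.
zfs destroy -r tank/data
zfs rename tank/data-rebalanced tank/data

Note that this requires enough free space to hold a second copy of the dataset until the original is destroyed, and it only rebalances the datasets you copy, not the whole pool.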
 