vdev spanning


The Gecko

Dabbler
Joined
Sep 16, 2013
Messages
18
Is there a command I can issue (zdb ?) to discover all the vdevs a particular file sits on?

I built a pool consisting of two vdevs and moved several TB of data onto it. Later, I added a third vdev to the pool. The downside is that over time and with use, the two original vdevs dropped to 1 TB free while the third vdev still had 4 TB free. I'm looking to identify which files do not span all three vdevs so I can copy/delete them and redistribute the content across all three vdevs.
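
For reference, this is roughly how the per-vdev imbalance shows up (pool name "tank" is just a placeholder):

Code:
# Capacity, allocation and free space per vdev (and per member disk)
zpool list -v tank

# Similar breakdown, plus live I/O statistics per vdev
zpool iostat -v tank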
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
AFAIK there is no easy way to identify which files sit on a particular vdev. I say "easy" because the information certainly exists (ZFS has to track it), but assembling it is probably horribly difficult if you don't know how to do it.
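
That said, for a single file you can dig it out of zdb, roughly like this (dataset, path, and object number below are placeholders, and you'd have to repeat it for every file, which is where "horribly difficult" comes in):

Code:
# A file's inode number is its ZFS object number
ls -i /mnt/tank/dataset/bigfile.iso

# Dump that object's block pointers; each DVA prints as <vdev-id:offset:asize>,
# so the first field of every DVA tells you which vdev holds that block
zdb -dddddd tank/dataset 12345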

I will tell you that trying to redistribute the disk space is pretty much pointless. People keep asking about this, and it really doesn't matter as much as people want to think. ZFS will naturally prefer the "free" vdev for new writes (not new files, new writes), so you will end up with a fairly even distribution over time.
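
You can actually watch this happen (pool name is a placeholder):

Code:
# Per-vdev I/O every 5 seconds; the emptier vdev should be taking
# the bulk of the new writes until allocation evens out
zpool iostat -v tank 5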

At the end of the day, if you are having performance problems that you think are going to be resolved by redistributing your data, you're ignoring some simple facts:

1. You can't defrag files on ZFS, and fragmentation only gets worse the more a pool is used.
2. The performance gain is actually pretty minimal.
3. There are plenty of other ways to see performance improvements, like more RAM, a dedicated log device for the ZIL, and an L2ARC (when appropriate); see the sketch after this list.
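
For what it's worth, adding those devices later is a one-liner each (pool and device names below are placeholders; in practice a log device should be mirrored and power-loss protected):

Code:
# Add a dedicated log device (SLOG) for the ZIL
zpool add tank log gpt/slog0

# Add an L2ARC cache device
zpool add tank cache gpt/l2arc0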

From my perspective, you're asking for little to no gain on a problem that isn't really a problem, and there are other, bigger ways to get more performance than what you are trying to do. Go for the low-hanging fruit first. Nobody should take on the most difficult task while expecting only minor gains (if any... performance can actually go down if fragmentation is bad enough).
 