The larger vdev is the original from when the pool was created. The smaller vdev was added later on. Pretty much everyone everywhere for all eternity has been hammering home the concept that writes are allocated based on the vdev's free space relative to its size (e.g. in a pool with vdev "a" having 4TB free and vdev "b" having 1TB free, vdev "a" will see 4x the writes).
Well, that didn't happen here. The good news is writes to the smaller vdev have stopped and the system is working fine. What knob did I twiddle that broke the space allocator?
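For what it's worth, the "proportional to free space" behavior can be sketched as a toy simulation. This is a simplified illustration only, not the actual ZFS metaslab allocator (which also applies a bias toward emptier vdevs and other weighting heuristics); the vdev names and sizes below just mirror the 4TB/1TB example above.

```python
# Toy sketch of ZFS-style proportional-fill allocation: each write lands
# on a vdev with probability proportional to that vdev's free space.
# Simplified illustration only -- real ZFS weighting is more complex.
import random

def pick_vdev(free):
    """Choose a vdev name with probability proportional to its free space."""
    total = sum(free.values())
    r = random.uniform(0, total)
    for name, f in free.items():
        r -= f
        if r <= 0:
            return name
    return name  # fallback for floating-point edge cases

# vdev "a": 4 TB free, vdev "b": 1 TB free (units: GB)
free = {"a": 4000.0, "b": 1000.0}
counts = {"a": 0, "b": 0}
for _ in range(100_000):
    counts[pick_vdev(free)] += 1

# The split of allocations should come out roughly 4:1
print(counts["a"] / counts["b"])
```

Under this model, the larger vdev should absorb about four times the writes; the thread's whole puzzle is that the observed behavior deviated from it.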
In our system, though, I think the small vdev was added at the same time as the bigger one. The pool was created using the GUI (the legacy GUI, because I can't stand the new one, but that's a different topic):

And the small vdev received substantially more writes than the bigger one.

In the first few months the 500G disks had more than twice the allocated gigabytes (and writes) of the 4T disks, though the numbers are now closer to equal (only in GB, not in %).

The configuration was made out of curiosity and then kept, because it showed considerably better performance than a pure 2x4T-disk zpool.
Now, however... that 73% fragmentation on the smaller vdev looks very unhealthy to me. And we host VMs there.
So the question is: how do I actually make it true
that writes are allocated based on the vdev's free space