Chris Moore
Hall of Famer
- Joined: May 2, 2015
- Messages: 10,079
I am throwing this out there so other people can see it and perhaps use this information to understand what is going wrong with their own systems. I think I know what the problem is with my system, but I will be happy to learn from anyone who has wisdom to share.
I replaced some "old" 2 TB drives in my system with new 4 TB drives (an entire vdev), which required me to destroy the pool, recreate it with the new drives, and copy the data back. Based on past performance of the system, this should have taken about 2.5 hours. Six hours later, it is still not done.
Looking into the issue, I found that three of the "new" drives are not performing as they should.
The output below is from
zpool iostat -v
and I used a screen capture so I could mark it:
If you notice the numbers I highlighted, they are significantly lower for those three drives, and that is bringing down the performance of the entire vdev. See how it is 46.8M per drive (because all the drives are loaded equally), while in the other vdev the drives are all at 58.7M. All the drives in vdev-1 are running slower because of those three drives, and even a single drive could have this effect: one slow drive drags down the performance of its vdev, and one slow vdev drags down the performance of the pool. The overall performance is less than half what it should be.
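To illustrate the kind of comparison described above, here is a rough sketch of how you could programmatically spot underperforming members from `zpool iostat -v` style figures. The sample rows, column layout, and the 80% threshold are my own assumptions for the example, not the actual output from this system:

```python
import re

# Hypothetical sample rows in the spirit of `zpool iostat -v` output,
# using per-vdev write bandwidth figures like those in the post.
SAMPLE = """\
raidz2-0  0  470  0  58.7M
raidz2-1  0  375  0  46.8M
"""

def parse_bandwidth(field):
    """Convert a bandwidth field like '46.8M' to bytes per second."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    m = re.fullmatch(r"([\d.]+)([KMG]?)", field)
    return float(m.group(1)) * units.get(m.group(2), 1)

def flag_slow(rows, threshold=0.8):
    """Return names whose bandwidth is below `threshold` of the fastest."""
    parsed = [(name, parse_bandwidth(bw)) for name, bw in rows]
    peak = max(bw for _, bw in parsed)
    return [name for name, bw in parsed if bw < threshold * peak]

# Take the first (name) and last (write bandwidth) column of each row.
rows = [(f.split()[0], f.split()[-1]) for f in SAMPLE.splitlines()]
print(flag_slow(rows))  # the slower vdev stands out
```

The same idea extends to per-drive rows: since ZFS loads the members of a vdev equally, any drive reporting noticeably less bandwidth than its siblings is the one holding the rest back.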