cmh
Explorer
- Joined: Jan 7, 2013
- Messages: 75
Sorry if this has been asked before, but when I use search the links just lead to blank pages. I tried different browsers and even had other folks look at the links I was getting, with the same result. Not sure what's up there.
I've got a simple NAS setup: four 4TB drives in a dual mirror stripe (two mirrored pairs, striped). I'm getting close to the 80% full zone where everything is predicted to slow down, so I'm getting some 8TB drives for the upgrade.
I'm trying to decide whether to do the upgrade in place: fail one drive, resilver onto an 8TB, and work through the array until it's all 8TB drives, at which point the pool should see the increased space. This should be fine, but the catch is that when I first set up the NAS I had just two drives, and I put data on it then. As I ran out of room I added the other two drives, which means some of the data is heavily loaded on the first two spindles. I can see this in my Graphite metrics: on certain read-heavy operations, the reads on those two spindles are much heavier at the start.
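For reference, the in-place path I'm picturing is just a series of zpool replace operations, one disk at a time, with autoexpand turned on so the pool grows once the last mirror is all-8TB. A rough sketch (the pool name tank and the device names are placeholders, not my actual layout):

```
# Let the pool grow automatically once both sides of each mirror are 8TB.
zpool set autoexpand=on tank

# Swap one 4TB disk for an 8TB and resilver onto it.
zpool replace tank ada1 ada5

# Wait for the resilver to finish before touching the next disk.
zpool status tank

# Repeat for the remaining three disks, one at a time.
```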
So before I do the swap I'm thinking I have a couple of options, and I'm curious what folks think the best approach might be:
- Leave it as is and just do the fail/swap to upgrade. The data's unbalanced, but it's been like that for a couple of years now and I don't notice any obvious ill effects.
- Do the fail/swap to upgrade, then use zfs send/recv to create a new copy of the datasets; once they're in sync, rename the two so the new copy replaces the original (roughly as sketched after this list). This way the data will be evenly loaded across all four disks.
- Set up the array in a different host (there's no space to install the four extra drives in the current system) and zfs send/recv over to it; then, once they're in sync, shut both hosts down and move the drives.
- ...something I haven't thought of yet.
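For the second option, what I have in mind is roughly the following (the dataset name tank/data is hypothetical):

```
# Snapshot the source and copy it to a new dataset in the same pool;
# the newly written blocks get striped across both mirror vdevs.
zfs snapshot -r tank/data@rebalance
zfs send -R tank/data@rebalance | zfs recv tank/data-new

# After stopping writers, take a final incremental pass, then swap the names.
zfs snapshot -r tank/data@rebalance2
zfs send -R -i @rebalance tank/data@rebalance2 | zfs recv -F tank/data-new
zfs rename tank/data tank/data-old
zfs rename tank/data-new tank/data

# Once everything checks out, reclaim the space.
zfs destroy -r tank/data-old
```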
I'm not afraid of doing the zfs send/recv manually, as I've done that several times with ZFS on Linux and such.
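And if I go the separate-host route, my understanding is the final move is just an export/import once the last incremental has been received (assuming I call the new pool newtank):

```
# On the temporary host, after the final send/recv pass:
zpool export newtank

# Move the drives into the NAS, then import, renaming back to the old
# pool name (works once the old pool has been exported or retired):
zpool import newtank tank
```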
Thanks!