EDIT:
This turned into somewhat of a "documenting the journey" kind of post.
________
Hello,
(Version: TrueNAS-13.0-U4)
I'd like to rebuild my pool, with the goal of shrinking my two vdevs from 7-wide to 6-wide, hopefully "somewhat gracefully".
Most data is backed up, but I still need to use one or two drives from the current pool during the "rebuild window".
I expected this to be quite simple and straightforward: first offline a drive, wipe it in the GUI (crucially: a quick wipe), and once the drive showed up in the list of available drives for pool creation, use it for a new pool. It does accept it, until halfway through the creation it throws this error:
Error: ('one or more vdevs refer to the same device, or one of\nthe devices is part of an active md or lvm device',)
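For reference, something like the following from a shell session could have checked whether the wiped drive still carried old partition or ZFS label data; da12 is just an example device name here, and the idea that leftover labels are what trips the middleware is my own guess, not something I've confirmed:
Code:
# Show whatever partition table is still present on the example drive da12
gpart show da12

# Dump any ZFS labels remaining on its old data partition
zdb -l /dev/da12p2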
For the sake of documenting my ill-advised hackery:
Next I onlined the drives again (one from each vdev).
What happened then surprised me a little bit.
The GUI gladly accepted the freshly wiped drives, without any resilvering or scan. All boxes green?!
As the drives were wiped, the gptid label was correctly lost, a sign that something did change:
Code:
wd60efrx                                        ONLINE       0     0     0
  raidz2-0                                      ONLINE       0     0     0
    gptid/14ef1fa6-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    gptid/15c495ba-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    gptid/16990bee-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    gptid/1769399b-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    gptid/18479def-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    gptid/1911207e-e0a4-11e5-b134-0cc47ab3208c  ONLINE       0     0     0
    da12p2                                      ONLINE       0     0     0
  raidz2-1                                      ONLINE       0     0     0
    gptid/7fe401b9-d3a0-11ec-9bc5-00259051e3f2  ONLINE       0     0     0
    da11p2                                      ONLINE       0     0     0
    gptid/81b84a85-d3a0-11ec-9bc5-00259051e3f2  ONLINE       0     0     0
    gptid/81c61691-d3a0-11ec-9bc5-00259051e3f2  ONLINE       0     0     0
    gptid/82601510-d3a0-11ec-9bc5-00259051e3f2  ONLINE       0     0     0
    gptid/a333f8c5-6a61-11ed-b21b-ac1f6bb3a54c  ONLINE       0     0     0
    gptid/8364ed1f-d3a0-11ec-9bc5-00259051e3f2  ONLINE       0     0     0
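(For cross-reference, the mapping between the remaining gptids and the daX device names can be listed with glabel; the grep below is just an example filter:)
Code:
# List GEOM labels so gptids can be matched to daX devices
glabel status

# Narrow it down to a single device, e.g. da11
glabel status | grep da11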
I proceeded to attempt to offline the drives again, so I could continue with further zfs work.
At this point the drives would no longer accept being "offlined"; nothing happens in the GUI.
By now it was obvious that the situation was unfolding in an unstable way.
I proceeded to the CLI, attempting to offline a drive:
Code:
zpool offline wd60efrx da12p2

That did not work either (no error code, no change);
Code:
zpool list -v wd60efrx

would still show the drive as online, despite being the one most recently wiped. This is when I started a scrub, to at least return to a better starting point while waiting for the next step, and reached out for advice with this post.
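What I'm considering trying next from the CLI, though I haven't run it yet, is addressing the stubborn disk by its vdev GUID instead of the daXp2 name; the GUID placeholder below is hypothetical and would have to be read from the status output first:
Code:
# List per-vdev GUIDs instead of device names
zpool status -g wd60efrx

# Hypothetical: offline the wiped disk by its GUID rather than da12p2
# zpool offline wd60efrx <guid-of-da12p2>

# Meanwhile, watch the scrub progress
zpool status -v wd60efrx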
A sketchy path that comes to mind:
Maybe it would work if I offline and wipe the drives, then export the pool, create a "tanktemp" pool on the buffer-zone drives for the migration, and finally import the larger wd60efrx pool back (rough sketch below)? The part I don't like is that this seemingly carries significantly more risk, with the issues of importing a "confused" (or whatever to call it after the experiences above?) and simultaneously degraded pool.
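Roughly the sequence I'm imagining; the drive names and the tanktemp layout below are placeholders for illustration, not a committed plan:
Code:
# Export the existing pool once the buffer drives are offlined and wiped
zpool export wd60efrx

# Create a temporary pool on the freed drives (layout still to be decided;
# a plain stripe is shown purely as an example)
zpool create tanktemp da11 da12

# Bring the original, now degraded, pool back in
zpool import wd60efrx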
I'm hoping for any advice on how to proceed.
Cheers,