everythingisonfire
Cadet
Joined: Aug 24, 2022
Messages: 9
I have two raidz2 pools with 8 disks each. Each pool had one disk showing signs of going bad, so I decided to replace them both at the same time.
I marked the two disks as offline in the FreeNAS web UI, removed them, and inserted the two new disks.
I issued the replace command for the disk in the "tank" pool from the web UI. Everything went fine and the pool started resilvering.
The web UI then became unresponsive when I tried to view the volume status for the "backups" pool, so I issued the replace command for the other disk over SSH.
I must have done something wrong, because the "tank" pool now reports 9 disks, two of which are marked as being resilvered at the same time, while the "backups" pool is as I left it (8 disks, one of which is marked as offline).
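For reference, the CLI equivalent of what I intended to do is roughly the following (the gptid and device names below are placeholders for illustration, not the ones I actually typed):
Code:
# intended sequence, one failing disk per pool (placeholder names)
zpool offline tank    gptid/<failing-disk-in-tank>
zpool offline backups gptid/<failing-disk-in-backups>
# ...physically swapped the two failed disks for the new ones...
zpool replace tank    gptid/<failing-disk-in-tank>    <new-disk-for-tank>      # done via the web UI
zpool replace backups gptid/<failing-disk-in-backups> <new-disk-for-backups>   # attempted over SSH
This is the current output of zpool status -x: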
Somehow running "history" via SSH does not report the previous zpool replace command I ran, but looking at the result I suspect I ran something like
zpool replace tank deviceIdOfTheRemovedDriveInBackups newDeviceName
or
zpool replace backups deviceIdOfTheRemovedDriveInTank newDeviceName
instead of
zpool replace backups deviceIdOfTheRemovedDriveInBackups newDeviceName
Web UI is still unresponsive when trying to view the volume status for the backups pool, but that's a minor concern right now.
How can I remediate? (have 2 pools with 8 disks each)
Thank you!!
I've marked the two disks as offline in the freenas web UI, removed the 2 disks, inserted 2 new disks.
I gave the command to replace the disk for the "tank" pool from the web UI. Everything fine and it started resilvering.
The web UI then decided to become unresponsive when trying to view the volume status for the "backups" pool, so I gave a command to replace the other disk from SSH.
I must have done something wrong because now the pool "tank" is reporting 9 disks with 2 marked as being resilvered at the same time, and the pool "backups" is as I left it (8 disks, of which one is marked as offline).
Code:
[root@nas01] ~# zpool status -x
  pool: backups
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 846K in 521h30m with 0 errors on Sun Aug 14 17:30:42 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        backups                                         DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/1b182b9d-bb16-11ea-be28-003048d504aa  ONLINE       0     0     0
            6335302298534030911                         OFFLINE      0     0     0  was /dev/gptid/5917b5c2-ff88-11e6-87c1-003048d504aa
            gptid/59b11de3-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0
            gptid/5a4b8236-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0
            gptid/5adfbcd0-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0
            gptid/5b768f4d-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0
            gptid/5c151242-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0
            gptid/5cb278e0-ff88-11e6-87c1-003048d504aa  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Aug  8 05:59:14 2022
        13.9T scanned out of 23.8T at 10.3M/s, 278h49m to go
        3.42T resilvered, 58.42% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          raidz2-0                                        ONLINE       0     0     0
            gptid/c6946e38-bb14-11ea-be28-003048d504aa    ONLINE       0     0     0
            gptid/60dd0414-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0
            gptid/61724eb6-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0
            replacing-3                                   ONLINE       0     0     0
              da14                                        ONLINE       0     0     0  block size: 512B configured, 4096B native  (resilvering)
              gptid/c9c295a1-1719-11ed-9c40-003048d504aa  ONLINE       0     0     0  block size: 512B configured, 4096B native  (resilvering)
            gptid/62927254-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0
            gptid/6321d982-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0
            gptid/63b84530-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0
            gptid/64497fe6-fe04-11e6-87c1-003048d504aa    ONLINE       0     0     0

errors: No known data errors
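In case it matters, this is how I am trying to map the resilvering members back to physical drives (assuming glabel/gpart are the right tools here; the grep pattern is just the start of the new gptid from the status output above):
Code:
# show which device node each GPT label lives on (FreeBSD/FreeNAS)
glabel status | grep -i c9c295a1
# show the partition layout of the raw device that appeared in "tank"
gpart show da14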
Somehow, running "history" over SSH does not show the zpool replace command I previously ran, but judging by the result I suspect I ran something like
zpool replace tank deviceIdOfTheRemovedDriveInBackups newDeviceName
or
zpool replace backups deviceIdOfTheRemovedDriveInTank newDeviceName
instead of
zpool replace backups deviceIdOfTheRemovedDriveInBackups newDeviceName
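Since the shell history is missing, I am hoping the pool's own command log still shows what I actually ran; if I understand zpool history correctly, something like this should list the recent administrative commands for each pool:
Code:
# zpool history keeps a per-pool log of administrative commands
zpool history tank | tail -n 20
zpool history backups | tail -n 20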
The web UI is still unresponsive when I try to view the volume status for the "backups" pool, but that's a minor concern right now.
How can I remediate this and get back to two pools with 8 disks each?
Thank you!!