RchGrav
Dabbler | Joined: Feb 21, 2014 | Messages: 36
Hi Guys,
Looks like I have a pool in a degraded state, I think because the hot spares kicked in. This system is running FreeNAS 11.1-U4. Please take a look at the zpool status output below. From what I have read, I assume I need to detach the faulted drives, and the spares will then become permanent members of their mirrors.
Code:
root@silo:~ # zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:11 with 0 errors on Sun Oct  7 03:45:11 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada1p2    ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0 in 0 days 08:43:03 with 0 errors on Sun Sep 30 08:43:21 2018
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              DEGRADED     0     0     0
          mirror-0                                        ONLINE       0     0     0
            gptid/284957e0-54f7-11e5-952d-0cc47a34f672    ONLINE       0     0     0
            gptid/28aa3d5b-54f7-11e5-952d-0cc47a34f672    ONLINE       0     0     0
          mirror-1                                        DEGRADED     0     0     0
            gptid/5560a54f-54f7-11e5-952d-0cc47a34f672    ONLINE       0     0     0
            spare-1                                       DEGRADED     0     0     0
              gptid/55bfeb35-54f7-11e5-952d-0cc47a34f672  FAULTED      9     5     0  too many errors
              gptid/a25d0618-5b03-11e5-ba30-0cc47a34f672  ONLINE       0     0     0
          mirror-2                                        ONLINE       0     0     0
            gptid/8cb54a53-54f7-11e5-952d-0cc47a34f672    ONLINE       0     0     0
            gptid/b3e42571-5695-11e5-aa4b-0cc47a34f672    ONLINE       0     0     0
          mirror-3                                        DEGRADED     0     0     0
            gptid/f21b968f-54f7-11e5-952d-0cc47a34f672    ONLINE       0     0     0
            spare-1                                       DEGRADED     0     0     0
              gptid/f2802035-54f7-11e5-952d-0cc47a34f672  FAULTED     15   379     0  too many errors
              gptid/5cd20210-5b03-11e5-ba30-0cc47a34f672  ONLINE       0     0     0
          mirror-4                                        ONLINE       0     0     0
            gptid/524b645d-54fc-11e5-952d-0cc47a34f672    ONLINE       0     0     0
            gptid/e009e2d6-54fd-11e5-952d-0cc47a34f672    ONLINE       0     0     0
        spares
          10021450293346954392                            INUSE     was /dev/gptid/5cd20210-5b03-11e5-ba30-0cc47a34f672
          12468965327649564557                            INUSE     was /dev/gptid/a25d0618-5b03-11e5-ba30-0cc47a34f672

errors: No known data errors
root@silo:~ #
Do I need to do the following?
zpool detach tank gptid/55bfeb35-54f7-11e5-952d-0cc47a34f672
zpool detach tank gptid/f2802035-54f7-11e5-952d-0cc47a34f672
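To identify which physical disks those faulted gptids belong to, I'm assuming I can map the labels like this (the /dev/ada2 device below is just an example; I'd substitute whatever glabel actually reports):
Code:
# Map the GPT labels of the faulted members to their device nodes:
glabel status | grep 55bfeb35
glabel status | grep f2802035

# Then pull the serial number of the underlying disk so I can find it
# in the chassis (assuming the label resolves to e.g. ada2p2 -> ada2):
smartctl -i /dev/ada2 | grep -i serial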
Once I identify and remove the failed disks, I assume the best thing to do is to replace them and re-enable the replacements as hot spares. I have never done this, so if any kind souls out there could assist with some steps, commands, or recommendations for this endeavor, it would be appreciated. For example, do I need to manually resilver the spares, and what should I expect to see at each point in the process so I know things are going well? I think I'm in good shape here, but any reassurance is appreciated.
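Based on what I've read so far, I'm guessing the rest of the sequence after the detach looks roughly like this. The NEW-DISK-GPTID below is a placeholder, and on FreeNAS I'd probably add the replacement spares through the GUI so the disks get partitioned the standard way:
Code:
# After detaching the FAULTED devices, each in-use spare should be
# promoted to a permanent member of its mirror with no manual resilver
# (they already resilvered when they kicked in).

# 1. Physically swap the failed disks, then add the new disks back to
#    the pool as hot spares (placeholder gptid):
zpool add tank spare gptid/NEW-DISK-GPTID

# 2. Verify the pool returns to ONLINE with both spares showing AVAIL:
zpool status tank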
Thank you,
Rich