Hi all, and in advance thank you for your help and patience.
Build FreeNAS-9.10.1 (d989edd)
Platform AMD A6-6400K APU with Radeon(tm) HD Graphics
Memory 15269MB
I have the above build with a number of RAID sets, all running fine. One particular disk started to show some errors, so I started a replace on it through the GUI. While the replace was running, the disk failed outright. That was a few weeks ago, and although the server is "running" normally, the replace never appears to complete. The resilver did finish, and I have since run a scrub without errors.
The following is the output from zpool status:
---------------------------------------------------------------------------
[root@xxxxxxxxx] ~# zpool status Raid5Root
  pool: Raid5Root
 state: DEGRADED
  scan: scrub repaired 0 in 8h50m with 0 errors on Sun Apr 2 08:50:23 2017
config:

	NAME                                            STATE     READ WRITE CKSUM
	Raid5Root                                       DEGRADED     0     0     0
	  gptid/5df43782-8177-11e4-8275-002590394642    ONLINE       0     0     0
	  gptid/5e591c2d-8177-11e4-8275-002590394642    ONLINE       0     0     0
	  gptid/5ec42610-8177-11e4-8275-002590394642    ONLINE       0     0     0
	  replacing-3                                   DEGRADED     0     0     0
	    13953885356757872590                        OFFLINE      0     0     0  was /dev/gptid/5f26c751-8177-11e4-8275-002590394642
	    gptid/01438f44-77e3-11e6-b673-6805ca443bbe  ONLINE       0     0     0

errors: No known data errors
---------------------------------------------------------------------------
I have scanned the forums and found references to "detaching" the disk via the Volume Status GUI. I can see the failed disk in the GUI, and it shows as offline, but when I select it there is no "detach" option, only "replace", which looks as if it would restart the whole replace process.
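For reference, this is the CLI equivalent of the "detach" I keep seeing mentioned. I have NOT run this yet and would appreciate confirmation from someone more experienced that it is the right move here; the GUID is taken from my zpool status output above.

```shell
# Detach the failed OFFLINE member (referenced by its numeric GUID) from
# the replacing-3 vdev. If this is correct, ZFS should collapse the
# replacing vdev and leave the new disk in place on its own.
zpool detach Raid5Root 13953885356757872590

# Then confirm the pool state afterwards:
zpool status Raid5Root
```

My understanding is that detach on a member of a replacing vdev is safe once the resilver onto the new disk has completed, but I would rather have that confirmed before touching a degraded pool.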
Could anyone offer some advice? I am very conscious that the replacing-3 vdev is showing as degraded, so if I lose one of the other disks I will have to rebuild and restore, which I would like to avoid. I wanted to check with the experts before I made the situation worse.