Unable to add drive back to ZFS pool

Status
Not open for further replies.

brwatters

Dabbler
Joined
Jun 19, 2012
Messages
12
I posted this yesterday, but it's gone from the forum, so I am reposting it.

We had a drive that showed as bad, so we took it offline and reformatted it. Now we are attempting to bring it back online and/or add it back to the pool, and it is failing with the following. Not sure how to proceed.

FreeNAS-8.3.0-RELEASE-p1-x64 (r12825)

Feb 11 15:56:51 freenas notifier: swapoff: /dev/da19p1: Invalid argument
Feb 11 15:56:51 freenas notifier: 1+0 records in
Feb 11 15:56:51 freenas notifier: 1+0 records out
Feb 11 15:56:51 freenas notifier: 1048576 bytes transferred in 0.063011 secs (16641176 bytes/sec)
Feb 11 15:56:52 freenas notifier: dd: /dev/da19: short write on character device
Feb 11 15:56:52 freenas notifier: dd: /dev/da19: end of device
Feb 11 15:56:52 freenas notifier: 5+0 records in
Feb 11 15:56:52 freenas notifier: 4+1 records out
Feb 11 15:56:52 freenas notifier: 4608000 bytes transferred in 0.332177 secs (13872116 bytes/sec)
Feb 11 15:56:54 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "invalid vdev specification, use '-f' to override the following errors:, /dev/gptid/2e1bc2bc-9378-11e3-84b2-00142223ebd1 is part of active pool 'StorageZFS', "]
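For context on that last error: the middleware failed because ZFS found an old pool label on the partition, so it still considers the device a member of 'StorageZFS'. A minimal sketch of wiping the stale labels before retrying, assuming da19 is the detached disk and holds no live data (illustrative commands, not from the original post):

# ZFS keeps two labels at the front and two at the back of a device,
# so both ends need to be zeroed to remove the stale membership marker.
gpart destroy -F da19                        # drop the old partition table
dd if=/dev/zero of=/dev/da19 bs=1m count=4   # wipe the front labels
# wipe the back labels; dd stops on its own at the end of the device
dd if=/dev/zero of=/dev/da19 bs=1m oseek=$(( $(diskinfo da19 | cut -f 3) / 1048576 - 4 ))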


zpool status
pool: Storage
state: ONLINE
scan: scrub repaired 0 in 0h27m with 0 errors on Thu Jan 30 17:18:55 2014
config:

NAME STATE READ WRITE CKSUM
Storage ONLINE 0 0 0
gptid/5ff59bcf-bbd6-11e1-a51d-00142223ebd1 ONLINE 0 0 0

errors: No known data errors

pool: StorageZFS
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: scrub repaired 0 in 2h4m with 0 errors on Tue Feb 11 18:20:53 2014
config:

NAME STATE READ WRITE CKSUM
StorageZFS DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
gptid/65666bda-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6654e226-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/67437b18-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6831fe50-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/692bc5e1-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
386049288687119066 OFFLINE 1 0 0 was /dev/gptid/6a259afa-ba50-11e1-9b17-000bdbe2a1b2
gptid/6b17be11-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6c095ba4-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6cf7bbe5-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6dec3228-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6ee6fbd7-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6fe5e043-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/70de8a86-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/71d998c8-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/72d885cb-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/73d5dccc-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/74d452ce-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/75da5545-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/76df0b8f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/77df7a6f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/78da078f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/79d8bb72-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ae0c177-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7be1e9c7-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ce9b4b2-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7dedf763-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ef40382-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ff2e7e6-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0

errors: No known data errors
 

brwatters

Dabbler
Joined
Jun 19, 2012
Messages
12
More info

zpool clear StorageZFS 386049288687119066

zpool status
pool: Storage
state: ONLINE
scan: scrub repaired 0 in 0h27m with 0 errors on Thu Jan 30 17:18:55 2014
config:

NAME STATE READ WRITE CKSUM
Storage ONLINE 0 0 0
gptid/5ff59bcf-bbd6-11e1-a51d-00142223ebd1 ONLINE 0 0 0

errors: No known data errors

pool: StorageZFS
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scan: scrub repaired 0 in 2h4m with 0 errors on Tue Feb 11 18:20:53 2014
config:

NAME STATE READ WRITE CKSUM
StorageZFS DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
gptid/65666bda-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6654e226-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/67437b18-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6831fe50-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/692bc5e1-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
386049288687119066 UNAVAIL 0 0 0 was /dev/dsk/da19p2
gptid/6b17be11-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6c095ba4-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6cf7bbe5-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6dec3228-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6ee6fbd7-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/6fe5e043-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/70de8a86-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/71d998c8-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/72d885cb-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/73d5dccc-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/74d452ce-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/75da5545-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/76df0b8f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/77df7a6f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/78da078f-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/79d8bb72-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ae0c177-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7be1e9c7-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ce9b4b2-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7dedf763-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ef40382-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0
gptid/7ff2e7e6-ba50-11e1-9b17-000bdbe2a1b2 ONLINE 0 0 0

errors: No known data errors


zpool online StorageZFS 386049288687119066
warning: device '386049288687119066' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
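For reference, a hedged sketch of what that replace could look like from the CLI (on FreeNAS the GUI normally issues this for you; the gptid below is the one FreeNAS generated in the middleware error above, so substitute whatever gptid 'glabel status' actually reports for the new partition):

# Sketch only: swap the missing member (referenced by its vdev GUID) for
# the new partition; -f mirrors the override the error message asked for.
zpool replace -f StorageZFS 386049288687119066 gptid/2e1bc2bc-9378-11e3-84b2-00142223ebd1
zpool status StorageZFS    # a resilver should now be in progress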
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
I am going to go ahead and ask a (probably) stupid question...
Are those 28 drives in the same raidz1 vdev?

(To make your command-line output easier to read, you can use 'code' tags.)
 

term

Dabbler
Joined
Jan 14, 2014
Messages
10
From the error message, it looks like ZFS knows the disk was already part of the pool and requires a special override flag ('-f') to reuse it. Since it's best to do this procedure through the GUI, I doubt you will be able to use that override.

The right thing to do would be to get a fresh disk and use that instead. With 28 disks in a Z1 array, I hope your data is backed up somewhere else.
 

brwatters

Dabbler
Joined
Jun 19, 2012
Messages
12
Thanks for the URL .. I had already been there and done that before opening this issue. Again, the issue appears to be that taking the drive offline and detaching it did not in fact complete: even though it is no longer in the pool of drives and the server has been rebooted with the NEW drive in place, the new drive does not appear to be available via the GUI or CLI, and the OLD drive still shows as present in both. I have confirmed the NEW drive is seen at bootup:

camcontrol devlist
<SEAGATE ST3146707LC D704> at scbus0 target 0 lun 0 (pass0,da0)
<SEAGATE ST3146707LC D704> at scbus0 target 1 lun 0 (pass1,da1)
<SEAGATE ST3146707LC D704> at scbus0 target 2 lun 0 (pass2,da2)
<SEAGATE ST3146707LC D704> at scbus0 target 3 lun 0 (pass3,da3)
<SEAGATE ST3146707LC D704> at scbus0 target 4 lun 0 (pass4,da4)
<SEAGATE ST3146707LC D704> at scbus0 target 5 lun 0 (pass5,da5)
<DELL PV22XS E.19> at scbus0 target 6 lun 0 (ses0,pass6)
<SEAGATE ST3146707LC D704> at scbus0 target 8 lun 0 (pass7,da6)
<SEAGATE ST3146707LC D704> at scbus0 target 9 lun 0 (pass8,da7)
<SEAGATE ST3146707LC D704> at scbus0 target 10 lun 0 (pass9,da8)
<SEAGATE ST3146707LC D704> at scbus0 target 11 lun 0 (pass10,da9)
<SEAGATE ST3146707LC D704> at scbus0 target 12 lun 0 (pass11,da10)
<SEAGATE ST3146707LC D704> at scbus0 target 13 lun 0 (pass12,da11)
<SEAGATE ST3146707LC D704> at scbus0 target 14 lun 0 (pass13,da12)
<SEAGATE ST3146707LC D704> at scbus0 target 15 lun 0 (pass14,da13)
<SEAGATE ST3146707LC D704> at scbus1 target 0 lun 0 (pass15,da14)
<SEAGATE ST3146707LC D704> at scbus1 target 1 lun 0 (pass16,da15)
<SEAGATE ST3146707LC D704> at scbus1 target 2 lun 0 (pass17,da16)
<SEAGATE ST3146707LC D704> at scbus1 target 3 lun 0 (pass18,da17)
<SEAGATE ST3146707LC D704> at scbus1 target 4 lun 0 (pass19,da18)
<FUJITSU MAW3147NC 0104> at scbus1 target 5 lun 0 (pass20,da19)
<DELL PV22XS E.19> at scbus1 target 6 lun 0 (ses1,pass21)
<SEAGATE ST3146707LC D704> at scbus1 target 8 lun 0 (pass22,da20)
<SEAGATE ST3146707LC D704> at scbus1 target 9 lun 0 (pass23,da21)
<SEAGATE ST3146707LC D704> at scbus1 target 10 lun 0 (pass24,da22)
<SEAGATE ST3146707LC D704> at scbus1 target 11 lun 0 (pass25,da23)
<SEAGATE ST3146707LC D704> at scbus1 target 12 lun 0 (pass26,da24)
<SEAGATE ST3146707LC D704> at scbus1 target 13 lun 0 (pass27,da25)
<SEAGATE ST3146707LC D704> at scbus1 target 14 lun 0 (pass28,da26)
<SEAGATE ST3146707LC D704> at scbus1 target 15 lun 0 (pass29,da27)
<DELL VSF 0123> at scbus2 target 0 lun 0 (pass30,da28)
<DELL VCD 0133> at scbus2 target 1 lun 0 (pass31,cd0)
<HL-DT-ST DVD-ROM GDR8084N 1.01> at scbus4 target 0 lun 0 (pass32,cd1)


But it's not shown here:

glabel status
Name Status Components
ufs/FreeNASs3 N/A amrd0s3
ufs/FreeNASs4 N/A amrd0s4
gptid/5ff59bcf-bbd6-11e1-a51d-00142223ebd1 N/A amrd1p2
ufsid/4fda1b4ab3d6b92a N/A amrd0s1a
ufs/FreeNASs1a N/A amrd0s1a
ufs/FreeNASs2a N/A amrd0s2a
gptid/72d885cb-ba50-11e1-9b17-000bdbe2a1b2 N/A da0p2
gptid/73d5dccc-ba50-11e1-9b17-000bdbe2a1b2 N/A da1p2
gptid/74d452ce-ba50-11e1-9b17-000bdbe2a1b2 N/A da2p2
gptid/75da5545-ba50-11e1-9b17-000bdbe2a1b2 N/A da3p2
gptid/76df0b8f-ba50-11e1-9b17-000bdbe2a1b2 N/A da4p2
gptid/77df7a6f-ba50-11e1-9b17-000bdbe2a1b2 N/A da5p2
gptid/78da078f-ba50-11e1-9b17-000bdbe2a1b2 N/A da6p2
gptid/79d8bb72-ba50-11e1-9b17-000bdbe2a1b2 N/A da7p2
gptid/7ae0c177-ba50-11e1-9b17-000bdbe2a1b2 N/A da8p2
gptid/7be1e9c7-ba50-11e1-9b17-000bdbe2a1b2 N/A da9p2
gptid/7ce9b4b2-ba50-11e1-9b17-000bdbe2a1b2 N/A da10p2
gptid/7dedf763-ba50-11e1-9b17-000bdbe2a1b2 N/A da11p2
gptid/7ef40382-ba50-11e1-9b17-000bdbe2a1b2 N/A da12p2
gptid/7ff2e7e6-ba50-11e1-9b17-000bdbe2a1b2 N/A da13p2
gptid/65666bda-ba50-11e1-9b17-000bdbe2a1b2 N/A da14p2
gptid/6654e226-ba50-11e1-9b17-000bdbe2a1b2 N/A da15p2
gptid/67437b18-ba50-11e1-9b17-000bdbe2a1b2 N/A da16p2
gptid/6831fe50-ba50-11e1-9b17-000bdbe2a1b2 N/A da17p2
gptid/692bc5e1-ba50-11e1-9b17-000bdbe2a1b2 N/A da18p2
gptid/6b17be11-ba50-11e1-9b17-000bdbe2a1b2 N/A da20p2
gptid/6c095ba4-ba50-11e1-9b17-000bdbe2a1b2 N/A da21p2
gptid/6cf7bbe5-ba50-11e1-9b17-000bdbe2a1b2 N/A da22p2
gptid/6dec3228-ba50-11e1-9b17-000bdbe2a1b2 N/A da23p2
gptid/6ee6fbd7-ba50-11e1-9b17-000bdbe2a1b2 N/A da24p2
gptid/6fe5e043-ba50-11e1-9b17-000bdbe2a1b2 N/A da25p2
gptid/70de8a86-ba50-11e1-9b17-000bdbe2a1b2 N/A da26p2
gptid/71d998c8-ba50-11e1-9b17-000bdbe2a1b2 N/A da27p2

Also, when going to hit Replace in the GUI, you can see (see attached) that no new drive is listed or available to replace the OLD one.
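That is consistent with the glabel output: the new da19 has no GPT partitions yet, so there is no gptid for the GUI to offer as a replacement target. A rough sketch of the layout FreeNAS 8.x puts on pool members (swap on p1, freebsd-zfs on p2; the 2 GiB swap size is an assumption, and partitioning by hand bypasses the middleware):

gpart create -s gpt da19              # new GPT on the blank replacement disk
gpart add -t freebsd-swap -s 2g da19  # da19p1, FreeNAS-style swap
gpart add -t freebsd-zfs da19         # da19p2, the future pool member
glabel status | grep da19             # its gptid should now be listed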
 

Attachments

  • freenas FreeNAS 8.3.0 RELEASE p1 x64 r12825.png (35.8 KB)

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No clue off the top of my head. BUT, I will say... someone is absolutely suicidal to put even 1/5th that many disks in a RAIDZ1. I can't say I'm even the least bit surprised by your problem. I have to wonder how many other things you've done wrong that may be adding unnecessary problems.

To be completely honest, if those disks are 140 GB or so apiece, I'd get 2x4 TB drives and replace them all. The power saved by not spinning that many disks would probably pay for the new drives within 6 months.
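As a rough sketch of that claim, with assumed figures (about 12 W per spinning U320 disk and $0.12/kWh, both assumptions): 28 disks x 12 W is about 336 W, or roughly 2,900 kWh and $350 per year, versus perhaps 15 W total for two modern 4 TB drives.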
 

brwatters

Dabbler
Joined
Jun 19, 2012
Messages
12
Cyberjock .. thanks for the incredibly insightful and uplifting reply. As for many of us in IT, what is best practice and what is on hand are two different things in the real world.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Cyberjock .. thanks for the incredibly insightful and uplifting reply. As for many of us in IT, what is best practice and what is on hand are two different things in the real world.

I agree. Unfortunately, ZFS won't forgive you if you decide to do things in "the real world" that don't match "best practice". It's an extremely slippery slope.
 