ZPOOL status list shows gpt/da/old

Status
Not open for further replies.

swat565

Dabbler
Joined
Oct 19, 2011
Messages
14
Here's a picture of the array (it's set up as striped RAIDZ1). You can see that it still shows old disk entries and reports as degraded, but if you count the online disks, it's 16 (the total I have). Is there any way to fix/clean this up? I'd like to be able to see when the array is actually degraded...

[Attachment: zfs capture.JPG]
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
My thoughts are "I hope you have a backup"....

What a mess. I think if you try to fix it you'll make it worse. Those disks that say "replacing" are a problem; how long has it been like this?

Cringe....
 

swat565

Dabbler
Joined
Oct 19, 2011
Messages
14
A while? 2-3 months? It's fully rebuilt (I can pull a drive while it's on and it will still function); it just has entries of old disks :/

Edit: Why are they a problem? Maybe I'm misunderstanding what's going on here?
 

hz3701

Cadet
Joined
Jul 17, 2012
Messages
2
I have the same problem!

  pool: FTPAREA
 state: DEGRADED
 scrub: scrub completed after 0h53m with 0 errors on Tue Jul 17 03:01:48 2012
config:

        NAME                        STATE     READ WRITE CKSUM
        FTPAREA                     DEGRADED     0     0     0
          raidz2                    DEGRADED     0     0     0
            gpt/da0                 ONLINE       0     0     0
            replacing               DEGRADED     0     0     0
              12765628750481214324  UNAVAIL      0     0     0  was /dev/gpt/da1/old
              gpt/da1               ONLINE       0     0     0
            gpt/da2                 ONLINE       0     0     0
            gpt/da3                 ONLINE       0     0     0
            gpt/da4                 ONLINE       0     0     0
            gpt/da5                 ONLINE       0     0     0

errors: No known data errors
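For reference, the stale entry can be spotted mechanically: the GUID to detach is the numeric name on any UNAVAIL line whose "was" path ends in /old. A small sketch below illustrates this; the sample text is taken from the status output in this post, but the `stale_guids` helper is my own illustration, not a ZFS tool:

```python
import re

# Abbreviated zpool status output, pasted from the post above
STATUS = """\
  FTPAREA                     DEGRADED     0     0     0
    raidz2                    DEGRADED     0     0     0
      gpt/da0                 ONLINE       0     0     0
      replacing               DEGRADED     0     0     0
        12765628750481214324  UNAVAIL      0     0     0  was /dev/gpt/da1/old
        gpt/da1               ONLINE       0     0     0
"""

def stale_guids(status_text):
    """Return the numeric vdev GUIDs of UNAVAIL members left over
    from an old replace (their 'was' path ends in /old)."""
    pat = re.compile(r"^\s*(\d+)\s+UNAVAIL.*was\s+\S+/old\s*$", re.M)
    return pat.findall(status_text)

print(stale_guids(STATUS))  # each GUID printed is a candidate for `zpool detach`
```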
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
hz3701,

Your case is a little different: only one disk. I suppose you both can try this, but @swat565, do just one at a time, and maybe run a scrub in between each.

hz3701 try this:

zpool detach FTPAREA 12765628750481214324
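For swat565's case with several leftover entries, the same idea applied one disk at a time might look like the sketch below. This is not something tested on a live pool: the pool name `tank` and the GUID list are placeholders to be copied from your own `zpool status` output, and `DRYRUN=echo` is my addition so the script only prints the commands until you clear it:

```shell
#!/bin/sh
# Detach stale "replacing" members one at a time, scrubbing in between.
POOL=tank                            # placeholder -- use your real pool name
STALE_GUIDS="12765628750481214324"   # placeholders -- copy from `zpool status`
DRYRUN=echo                          # set DRYRUN= to actually run the commands

for guid in $STALE_GUIDS; do
    $DRYRUN zpool detach "$POOL" "$guid"
    $DRYRUN zpool scrub "$POOL"
    # Wait for the scrub to finish and re-check `zpool status`
    # before detaching the next one.
done
```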
 

hz3701

Cadet
Joined
Jul 17, 2012
Messages
2
Thank you, protosd.
The device 12765628750481214324 disappeared when I ran zpool detach FTPAREA 12765628750481214324.
The pool status went back online!
Thank you!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
A while? 2-3 months? It's fully rebuilt (I can pull a drive while it's on and it will still function); it just has entries of old disks :/

Edit: Why are they a problem? Maybe I'm misunderstanding what's going on here?

There are some bugs in this version of ZFS. Several people have lost their pools trying to clear the "replacing" message after their pools had apparently finished resilvering. I might try booting a rescue CD of PCBSD or FreeBSD 9, or FreeNAS 8.3 Alpha, which all have ZFS v28. Don't upgrade your pool, but import it, and depending on what zpool status -v says, you could try detaching those "old" drives like I mentioned above.
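The sequence described above, written out as commands, might look like this sketch. The pool name FTPAREA is from hz3701's output (substitute your own), the final export is my addition so the pool can be re-imported cleanly back in FreeNAS, and `DRYRUN=echo` keeps it harmless until removed; note the deliberate absence of `zpool upgrade`:

```shell
#!/bin/sh
# From a ZFS v28 rescue environment (FreeBSD 9 / PCBSD / FreeNAS 8.3 Alpha):
DRYRUN=echo            # set DRYRUN= to actually run the commands

$DRYRUN zpool import FTPAREA          # may need -f if last used by another host;
                                      # do NOT run `zpool upgrade`
$DRYRUN zpool status -v FTPAREA       # inspect the "replacing"/"old" entries
$DRYRUN zpool detach FTPAREA 12765628750481214324   # one stale GUID at a time
$DRYRUN zpool export FTPAREA          # export before booting back into FreeNAS
```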

Sorry for the delayed response, I'm on vacation and have a lot going on as usual ;)
 