Attempt to replace disk in pool adds a new disk to the pool?

Status
Not open for further replies.

Anthony E

Cadet
Joined
Sep 2, 2015
Messages
5
I have a 5-disk RAIDZ1 configuration.
SMART detected errors on 2 disks (SMART wasn't enabled for a while).
I attached an eSATA hard drive to replace one of them while the pool was active and the original disk was still connected. When the resilver completed, the 5-disk pool had become a 6-disk pool. When I attempted to remove the first faulty drive, the pool went into a degraded state.
Has anyone else come across this issue?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How, exactly, did you attempt to do the replacement? Because it sounds like you added a new disk to the pool, rather than replacing a disk.
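For reference, the difference at the command line looks roughly like this (pool and device names below are placeholders):

[CODE]
# Replace: the new disk takes over the failed disk's slot inside the RAIDZ vdev
zpool replace tank gptid/old-disk gptid/new-disk

# Add: the new disk becomes a separate top-level vdev striped with the RAIDZ.
# This cannot be undone and is a common mistake.
zpool add tank gptid/new-disk
[/CODE]

The GUI's replace function does the former; a pool that suddenly shows an extra disk suggests something like the latter happened.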

Oh, and the details the forum rules (which you said you read and agreed to when you signed up today) require, like hardware and FreeNAS version, would be helpful in figuring out exactly what's going on.
 

Anthony E

Cadet
Joined
Sep 2, 2015
Messages
5
I connected a new 1TB HDD through eSATA, went through the GUI, clicked on the defective disk, initiated the replace, and pointed it at the new disk. It went through the resilver. After completion I rebooted and swapped the disks. Now the pool shows 6 disks, 1 unavailable.
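As far as I can tell from the docs, that GUI replace should correspond to something like this under the hood (device ids are placeholders here):

[CODE]
zpool replace storage gptid/<old-disk-id> gptid/<new-disk-id>
[/CODE]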

Hardware:
i7-930
32GB DDR3 1600MHz
ZFS RAIDZ on onboard SATA2, 5-disk configuration
16GB boot SSD
256GB ZIL drive
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What is the output of the zpool status command? Post the results in code tags so the formatting is preserved.
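For example:

[CODE]
zpool status -v
[/CODE]

The -v flag also lists any individual files with errors, which will matter if the pool has data corruption.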
 

Anthony E

Cadet
Joined
Sep 2, 2015
Messages
5
Here is my current zpool status

[CODE]
  pool: storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 4.76G in 13h34m with 4 errors on Wed Sep  2 23:58:00 2015
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     6
          raidz1-0                                        DEGRADED     0     0    12
            gptid/1c3370b9-be89-11e1-9fc2-002215676300    ONLINE       0     0     0
            replacing-1                                   DEGRADED     0     0     1
              3125056829283377702                         UNAVAIL      0     0     0  was /dev/gptid/d16b77a7-751e-11e4-a7bd-002215676300
              gptid/ad92ebdb-5126-11e5-a9c5-0007e9149a2a  ONLINE       0     0     0
              15038405692304958772                        REMOVED      0     0     0  was /dev/gptid/454f9d38-5197-11e5-b0d3-0007e9149a2a
            gptid/7509b1e3-4c1b-11e5-ae20-0007e9149a2a    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/1d78ecf2-be89-11e1-9fc2-002215676300    ONLINE       0     0     2
            gptid/91cf5c7d-3562-11e5-bef0-0007e9149a2a    ONLINE       0     0     0  block size: 512B configured, 4096B native
[/CODE]
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Anthony E said:
Here is my current zpool status […]
Does that look like it kept the formatting? You need to do it again and use the code tags when posting. I told you this was important in my previous comment.

Even without that information, things don't look good. You have multiple drive failures in a RAIDZ1 pool. I suspect you will be restoring all your data from backup, if you have one.


Edit: actually I have no clue, I need to see that command output with formatting. Are you sure you pulled the correct drive, and did it fully finish resilvering?
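A quick way to answer the resilver question is the scan line of the status output, e.g.:

[CODE]
zpool status storage | grep -A 1 'scan:'
[/CODE]

A clean finish reads "resilvered ... with 0 errors on <date>"; any errors there mean the rebuild wasn't clean.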
 

Anthony E

Cadet
Joined
Sep 2, 2015
Messages
5
The pool originally had 5 drives, 2 of them with unreadable sectors. When I used the eSATA drive to replace one of them, I never removed the old drive from the pool. The reason is that with all 5 drives still attached, if it had issues reading from 1 of the 2 failing drives, it would still be able to resilver correctly. So when the resilver completed, I removed 1 bad drive, but the pool never removed it from itself.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Did you offline the drive before removing it?
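Pulling a disk that's still live in the pool, without offlining it first, is what typically leaves stale entries behind. In the GUI that's the disk's Offline action; from the shell it would be something like (device name is a placeholder):

[CODE]
zpool offline storage gptid/<failing-disk-id>
[/CODE]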
 

Anthony E

Cadet
Joined
Sep 2, 2015
Messages
5
I didn't offline the drive, because I was trying to use it for parity while resilvering the new disk.

Originally one drive hit SMART critical, another drive detected 800+ unreadable sectors, and a third reported 27 unreadable sectors. I replaced the 1st drive with no issue. During the resilver, the 2nd drive's unreadable sector count kept growing. Then I set the new drive to replace the drive with the large error count; that's when the issue started.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
No clue what to tell you. Did you follow the steps in the manual? It sounds like you still need to offline the old drive.
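One thing that may be worth trying, if the resilver onto the new disk really did complete: the stale members of replacing-1 can usually be dropped with zpool detach, using the numeric GUIDs from the status output you posted. Double-check the GUIDs against your own pool before running anything:

[CODE]
# GUIDs copied from the zpool status above -- verify before use
zpool detach storage 3125056829283377702    # the UNAVAIL original disk
zpool detach storage 15038405692304958772   # the REMOVED disk
[/CODE]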
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Did you resolve this issue?
 