zpool status unknown


Wade

Contributor
Joined
Feb 16, 2014
Messages
110
I have a RAIDZ2 with 6 hard drives.

After 2+ years the pool became degraded, but it was still fully operating.

Only drives 0, 1, 2, and 4 were showing.

I powered down the FreeNAS box and unplugged all the drives one at a time to reseat the connections.

When I powered back up I had drives 0, 1, 2, 3, and 4; however, my zpool was gone (unknown).

Am I screwed, or is there any hope of getting my zpool back?
 

Attachment: 192.168.1.200-.jpg

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Post the output of zpool status.
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
Shell
[root@freenas ~]# zpool status
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h7m with 0 errors on Sat May 7 03:53:36 2016
config:

NAME                                          STATE   READ WRITE CKSUM
freenas-boot                                  ONLINE     0     0     0
  gptid/f64a23c9-33a3-11e5-a339-60eb69340fa3  ONLINE     0     0     0

errors: No known data errors
[root@freenas ~]#

And this is the status for my zpool, named tank:

[root@freenas ~]# zpool status tank
cannot open 'tank': no such pool
[root@freenas ~]#
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Post the output of zpool import.
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
Thank you, ethereal, for your interest and kindness.

[root@freenas ~]# zpool import
pool: tank
id: 5610617844076698952
state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be compromised if imported.
see: http://illumos.org/msg/ZFS-8000-2Q
config:

tank                                            DEGRADED
  raidz2-0                                      DEGRADED
    gptid/7ea641e5-2398-11e4-b424-10c37b949d82  ONLINE
    gptid/7f0871b3-2398-11e4-b424-10c37b949d82  ONLINE
    gptid/7f6b618c-2398-11e4-b424-10c37b949d82  ONLINE
    gptid/7fcfaad8-2398-11e4-b424-10c37b949d82  ONLINE
    gptid/80883a33-2398-11e4-b424-10c37b949d82  ONLINE
    9067545562440134706                         UNAVAIL  cannot open
[root@freenas ~]#
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
In the GUI, go to Storage -> Volumes and hit Import Volume.

Does this allow you to import the volume?
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
This is what it says:
 

Attachment: 192.168.1.200- (1).jpg

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
zpool import tank tank2

Try the above; this should import the tank pool under the new name tank2.

I should warn you that I've never had this problem and I've never manually imported a pool. I think because you have RAIDZ2 with only 1 drive missing, this should let you get at your data.

But because the import is done on the command line and not in the GUI, there is a possibility that the GUI will not see tank2. You may still be able to get your data off, though.
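Just as a sketch of what that might look like at the FreeNAS shell (the -R /mnt altroot is my assumption about where FreeNAS normally mounts pools, and the read-only option is only a precaution I'd consider, not something I've actually had to use):

# import 'tank' under the new name 'tank2', mounted under /mnt
zpool import -R /mnt tank tank2

# if that refuses, a read-only import is sometimes worth a try
zpool import -o readonly=on -R /mnt tank tank2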
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
[root@freenas ~]# zpool import tank tank2
cannot import 'tank' as 'tank2': I/O error
Destroy and re-create the pool from
a backup source.
[root@freenas ~]#
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
I don't have a clue, I'm afraid. Maybe an adult will help us.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Post the output of:

camcontrol devlist

This might help.
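Also, glabel status should map the gptid labels that zpool prints to the actual adaX device names, which might show which physical disk is the one that dropped out. Just a sketch:

# list GEOM labels; the gptid/... names here match the ones in the zpool output
glabel status | grep gptid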
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
[root@freenas ~]# camcontrol devlist
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus1 target 0 lun 0 (pass1,ada1)
<ST32000542AS CC95> at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus4 target 0 lun 0 (pass4,ada4)
<Corsair Voyager 1100> at scbus6 target 0 lun 0 (pass5,da0)
[root@freenas ~]#
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
This was the last email I got when tank was degraded but still running, and only 4 drives showed in the GUI:

Checking status of zfs pools:
NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH    ALTROOT
freenas-boot  7.44G  535M   6.92G  -         -     7%   1.00x  ONLINE    -
tank          10.9T  4.99T  5.88T  -         10%   45%  1.00x  DEGRADED  /mnt

pool: tank
state: DEGRADED
status: One or more devices has been removed by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: scrub repaired 0 in 10h40m with 0 errors on Sun Apr 24 10:40:22 2016
config:

NAME                                            STATE     READ WRITE CKSUM
tank                                            DEGRADED     0     0     0
  raidz2-0                                      DEGRADED     0     0     0
    4353104186168628848                         REMOVED      0     0     0  was /dev/gptid/7ea641e5-2398-11e4-b424-10c37b949d82
    gptid/7f0871b3-2398-11e4-b424-10c37b949d82  ONLINE       0     0     0
    gptid/7f6b618c-2398-11e4-b424-10c37b949d82  ONLINE       0     0     0
    4406861119315121775                         REMOVED      0     0     0  was /dev/gptid/7fcfaad8-2398-11e4-b424-10c37b949d82
    gptid/80883a33-2398-11e4-b424-10c37b949d82  ONLINE       0     0     0
    gptid/814b9861-2398-11e4-b424-10c37b949d82  ONLINE       0     0     0

errors: No known data errors

-- End of daily output --
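For reference, the zpool online action that email suggests would have looked roughly like the line below, naming the removed drive by the GUID shown in the status output. As far as I know that only works while the pool is still imported, so it wouldn't help now.

zpool online tank 4353104186168628848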
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
After I powered down, reseated the connections, and powered back up is when I lost tank, but I got 1 of the 2 lost drives back.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Have you been running SMART tests on the drives?
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
This is my SMART test setup; it's been a while, but I think I copied it from cyberjock.
 

Attachment: jpg.jpg

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Post these in code tags, please:

smartctl -a /dev/ada0

smartctl -a /dev/ada1

smartctl -a /dev/ada2

smartctl -a /dev/ada3

smartctl -a /dev/ada4
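If the output scrolls off the top of the shell window, one option (just a suggestion; the /tmp filenames are arbitrary) is to send each report to a file and page through it afterwards:

# save the full report, then read it a screen at a time
smartctl -a /dev/ada0 > /tmp/ada0.txt
more /tmp/ada0.txt

and the same for ada1 through ada4.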
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
I ran all the commands, but the shell cuts off the top of the output and I could only see the bottom portion.
 

Wade

Contributor
Joined
Feb 16, 2014
Messages
110
I changed the resolution of the shell. Is this what you need for "smartctl -a /dev/ada0"?


After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 40 Device Fault; Error: ABRT

Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 40 00 00:04:40.234 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 00 00:04:40.234 IDENTIFY DEVICE

SMART Self-test log structure revision number 1
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1 Short offline Completed without error 00% 16495 -
# 2 Extended offline Completed without error 00% 16330 -
# 3 Short offline Completed without error 00% 16156 -
# 4 Short offline Completed without error 00% 15989 -
# 5 Short offline Completed without error 00% 15821 -
# 6 Extended offline Completed without error 00% 15658 -
# 7 Short offline Completed without error 00% 15317 -
# 8 Short offline Completed without error 00% 15149 -
# 9 Short offline Completed without error 00% 14981 -
#10 Extended offline Completed without error 00% 14819 -
#11 Short offline Completed without error 00% 14646 -
#12 Short offline Completed without error 00% 14479 -
#13 Short offline Completed without error 00% 14310 -
#14 Extended offline Completed without error 00% 14147 -
#15 Short offline Completed without error 00% 13806 -
#16 Short offline Completed without error 00% 13638 -
#17 Short offline Completed without error 00% 13477 -
#18 Extended offline Completed without error 00% 13316 -
#19 Short offline Completed without error 00% 13141 -
#20 Short offline Completed without error 00% 12973 -
#21 Short offline Completed without error 00% 12805 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

[root@freenas ~]#
 