PaleoN is in a different timezone, so I'm not sure when he'll be online. I think he still has a few more things for you to try.
I'm kind of wondering which device it thinks is unavailable, or why.
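(For anyone following along, one way to chase that question — offered only as a sketch, not something asked for in the thread — is to dump the ZFS labels straight off each pool member with zdb and compare the child GUIDs the pool expects with the devices actually present. This assumes the /dev/gptid nodes from the status output later in the thread exist on the rescue system.)

Code:
# Read the on-disk ZFS label from one pool member (no import needed);
# repeat for each gptid device and compare the guid/children entries.
zdb -l /dev/gptid/19177fb9-25fa-11e2-9ab0-00151736994a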
Code:
time zpool import -fFX storage

The first command took a long time...

Very long. I wasn't accounting for vfs.zfs.recover being set to 1.
I think he still has a few more things for you to try.

I have a few more, including one that may show us which it thinks is unavailable. A reboot is likely in order to reset vfs.zfs.recover to 0 and not have imports take 9 hours.
I'm kind of wondering which device it thinks is unavailable, or why.
I'd wait on that one. I meant this command:

Code:
time zpool import -fFX storage
I'd wait on that one.

Statements such as these make me want to wait until later:
As a final desperation move, or after you've managed to import the pool readonly and copy everything off you can try:
# zpool import -R /mnt -FX poolname
The above command has a good chance of grenading your pool, but it also has a chance of importing it.
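(A rough sketch of the "import read-only and copy everything off" part of that advice, assuming the read-only import ever succeeds; the destination path is just a placeholder for another disk or machine with enough space.)

Code:
# Import read-only with an alternate root so nothing gets written to the pool
zpool import -o readonly=on -R /mnt storage
# Then copy the data off with whatever tool is at hand, e.g.:
rsync -a /mnt/storage/ /backup/storage/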
So I will have to reboot now and do nothing for now? I don't know if I can access it by SSH after reboot though. As mentioned, I am at work. I did a quick configuration regarding forwarding the correct port...

That's probably the safer way to go. In addition, I could use another few hours myself. I was just checking in.
The -X and -T options are related; -T tells it to go back to a specific point in time, hopefully a time when things weren't damaged. The problem could be that ZFS doesn't always store files across all of the disks in your pool, so there might be some files it can't rebuild because it thinks one of the disks is missing. I think we need to really try to figure out what changed, and when, and why it thinks a disk is missing.
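(Not something anyone has been asked to run here, but for the curious: the txg numbers that -T takes can be read out of the uberblocks on any pool member with zdb, which also prints the timestamp each txg was written. A sketch, reusing one of the gptid devices from the zpool status output later in the thread.)

Code:
# Dump the labels and uberblocks from one member; each uberblock shows
# its txg and timestamp, which is where a value for -T would come from.
zdb -ul /dev/gptid/19177fb9-25fa-11e2-9ab0-00151736994a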
That's probably the safer way to go. In addition I could use another few hours myself. I was just checking in.
Can you remember specifically when you replaced it?
Also, can you remember the last time you did a scrub, exactly?
I'm not sure how, but those answers might be helpful.
I think I'm going to retire for the night. I have trouble with my neck and it's VERY painful. Spending time at the computer makes it worse...
IMHO, scrubs every 5 days is excessive. In fact with higher parity levels you can argue that you can scrub less often. The statistical likelihood of having an unrecoverable read on 3 separate disks for the same block of ZFS data is... unlikely to say the least. Other failure modes begin to dominate.
I would still scrub at least once a month and regularly run some long SMART tests to check for read errors on the drives. The SMART tests are faster, check any unused sectors on the drive and are less stressful on the drives.
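(As a concrete example of that kind of schedule — purely a sketch for a plain FreeBSD box rather than anything FreeNAS-specific — cron entries along these lines would give a monthly scrub and a weekly long SMART test. The device name is a placeholder and smartmontools has to be installed.)

Code:
# /etc/crontab additions (illustrative only; adjust devices, paths and timing)
# Scrub the pool at 03:00 on the 1st of every month
0 3 1 * * root zpool scrub storage
# Long SMART self-test on one drive every Sunday at 02:00; repeat per drive
0 2 * * 0 root smartctl -t long /dev/ada0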
Looking at this thread, which I created, I would say the drive was replaced on the 8th of December 2012.

It was the 4th, 2012-12-04.07:43:18 zpool replace storage 6201553240551106299 gptid/3dc2f956-3de6-11e2-8af1-00151736994a.

The last scrub would be a guess. I am guessing it was between 20 and 30 days ago. A new scrub was due one of these days.

Last scrub was the 23rd, 2013-02-23.23:23:34 zpool scrub storage.
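(Those lines are in the format zpool history prints, so if the pool were importable the same answers could be pulled out directly; a sketch, which only works against an imported pool and so would have had to be captured before the failure.)

Code:
# List pool-level events and filter for the interesting ones
zpool history storage | grep -E 'replace|scrub'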
I'm still here. Watching this thread and only getting more and more confused. The zpool seems to be convinced that another device should exist. Have you ever had a ZIL or L2ARC?
Also, I'm impressed that you haven't gone off doing things you shouldn't do. This is by far the longest I've seen an OP go without deciding to do something stupid despite everyone telling you not to do anything without direction. It's also the most troubleshooting I think I've ever seen for someone's data. I'm sure the two are related.
I'm really hoping we can figure out what is going on and get you your data back. I'm not sure if you're being patient because you know your wife will kill you if she figures out how bad things are right now, or if you are that "cool headed". In any case, keep up the good work. I'm rooting for you!
It was the 4th, 2012-12-04.07:43:18 zpool replace storage 6201553240551106299 gptid/3dc2f956-3de6-11e2-8af1-00151736994a.
Last scrub was the 23rd, 2013-02-23.23:23:34 zpool scrub storage.
Boot up mfsBSD normally and run:

Code:
zpool import -V storage

If that "imports", given the other error messages, the pool will most likely be considered faulted. Which also means you can't actually do anything with it.

This ought to show us what devices it considers missing:

Code:
zpool status -v
root@mfsbsd:/root # zpool import -V storage
root@mfsbsd:/root #
root@mfsbsd:/root # zpool status -v
pool: storage
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:
NAME STATE READ WRITE CKSUM
storage FAULTED 0 0 2
raidz2-0 ONLINE 0 0 8
gptid/19177fb9-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/3dc2f956-3de6-11e2-8af1-00151736994a ONLINE 0 0 0
gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
root@mfsbsd:/root #
root@mfsbsd:/root # zpool status
pool: storage
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:
NAME STATE READ WRITE CKSUM
storage FAULTED 0 0 2
raidz2-0 ONLINE 0 0 8
gptid/19177fb9-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/3dc2f956-3de6-11e2-8af1-00151736994a ONLINE 0 0 0
gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a ONLINE 0 0 0
root@mfsbsd:/root #
[root@freenas] ~# zpool status
no pools available
Does this mean anything?!

Yes, the zpool import -V is a different command. It is showing pool metadata corruption, which isn't good. Let's try another -T import:

Code:
zpool export storage
zpool import -T 732362 -o rdonly=on storage
root@mfsbsd:/root # zpool export storage
root@mfsbsd:/root # zpool import -T 732362 -o rdonly=on storage
cannot import 'storage': one or more devices is currently unavailable
OK, try:

Code:
zpool import -FT 732362 -o rdonly=on storage
root@mfsbsd:/root # zpool import -FT 732362 -o rdonly=on storage
cannot import 'storage': one or more devices is currently unavailable
This cannot be related in any way to the fact that we are using a different FreeBSD, right? As opposed to the FreeNAS USB stick. Because it's always complaining about "one or more devices is currently unavailable"...?

It's not. The error message could be improved.

Reboot mfsBSD:

Code:
set vfs.zfs.recover=1
set vfs.zfs.debug=1
boot

Then try (this will likely take hours again):

Code:
time zpool import -FXT 732362 storage
root@mfsbsd:/root # time zpool import -FXT 732362 storage
cannot import 'storage': one or more devices is currently unavailable
0.007u 3.797s 0:06.76 56.0% 112+2677k 240+0io 91pf+0w

root@mfsbsd:/root # time zpool import -FXT 732362 storage
cannot import 'storage': one or more devices is currently unavailable
0.000u 3.309s 0:05.78 57.0% 111+2649k 240+0io 0pf+0w