Imported a degraded pool - can't access it

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
Previous Post - Here

Previously had some issues with either a bad cable, bad HBA, or bad IOM on my shelf.

Dell R720xd
2x E5-2670
384gb RAM
LSI SAS-9207-8e
NetApp DS4246 (2x)
FreeNAS 11.1-U7 AND 11.2-U7
36 disk pool, 3x vdevs, raidz3

I was (and still am) seeing numerous errors about corrupted GPTs.

After reading up on multipath configurations (not something I'd ever set up before), it turns out FreeNAS started doing it automatically when I swapped in my new HBA. Worse, it was creating multipath devices that combined multiple physical disks into a single disk. I did a gmultipath destroy on all the devices that showed up, and lo and behold, a pool was visible to be imported!
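For anyone following along, the cleanup went roughly like this (a sketch; the geom name "disk1" is an example, the names on your system will differ):

```shell
# Sketch of tearing down unwanted gmultipath devices on FreeBSD/FreeNAS.
# "disk1" is an example geom name, not from my actual system.
gmultipath status          # list current multipath geoms and their member disks
gmultipath destroy disk1   # destroy the multipath geom named "disk1"
zpool import               # see which pools are now visible for import
```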

I ran zpool import -f tank; zpool status then showed:

Code:
root@freenas[/dev]# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0 days 11:18:07 with 0 errors on Sun Sep 29 08:18:17 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz3-0                                      DEGRADED     0     0     0
            gptid/e6f85169-5136-11e8-b376-bc305bf48148  ONLINE       0     0     0
            731087118901044337                          UNAVAIL      0     0     0  was /dev/gptid/f2bd171b-5136-11e8-b376-bc305bf48148
            gptid/181f4f1a-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/24da255f-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/325a8e49-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/4a4df2c1-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/6a52ee8d-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/7bc6a747-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/8d306d6b-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/a82f0ce2-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/bb5df1ae-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/db7bb774-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
          raidz3-1                                      DEGRADED     0     0     0
            gptid/70a9ffa7-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/7485f02d-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/787153df-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/7c4f172c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/8148a83c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            17309881899809254171                        UNAVAIL      0     0     0  was /dev/gptid/864aa9e8-96d0-11e8-a512-bc305bf48148
            gptid/8a5bc22b-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/8ed0335e-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/92c028a5-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/9793ea3b-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/9c71d36c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/a1766b2e-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
          raidz3-2                                      ONLINE       0     0     0
            gptid/bc7a19f9-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/bd626022-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/bef543fe-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c0e17dc6-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c27fc9e2-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c365fdbb-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c46b5d52-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c60d3684-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c7a6bc30-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c9416b43-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/ca343c33-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/cb3f2f2c-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0

errors: No known data errors


My volume isn't mounted, and the GUI shows no pools/volumes (though some of my old snapshots do show up). I have a spare disk for raidz3-0, and can get one (or use a larger disk) for raidz3-1.

Questions:

1) Do I just run zpool replace tank da33 da<newdisk> and wait? The manual has steps for replacing a failed disk in the GUI, but I don't have a pool in my GUI.
2) Can I mount the volume while the pool is in its current degraded state?
3) If I replace both failed disks, anything else I need to do?
4) Can I disable multipathing? At one point, after deleting the multipaths, I did a disk rescan and it recreated them.
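(For reference, my understanding of the general CLI form for question 1 is something like the following; the new device name is an example, not my actual disk:)

```shell
# Sketch of a CLI disk replacement on the pool "tank".
# ZFS identifies the missing disk by the numeric guid shown in zpool status.
zpool replace tank 731087118901044337 /dev/da36   # replace old (by guid) with new device
zpool status tank                                 # monitor the resilver
```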

I intend to wipe this pool after getting it online; I'm just treating this experience as a big learning opportunity, since I've never had to deal with a failed disk on FreeNAS. I'm still not entirely certain why I had problems in the first place, but my guess is something with my old HBA, shelf, IOM, or a cable. And then, somehow, the new HBA decided to use multipath.

Cheers and thanks for the help!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I don't have a pool in my GUI.
Use zpool export tank then import it in the GUI and go from there.
Can I mount the volume while the pool is in its current degraded state?
Yes, that should work.
If I replace both failed disks, anything else I need to do?
The pool should be healthy again... reassess whether your pool design suits your needs, and continue as normal if it does.
can I disable multipathing?
I tried to do this myself at one point, I think with a preinit command to unload the multipath kernel module (kldunload geom_multipath.ko)... I think it was working at one point, but sometimes it would catch a new disk and load anyway, or something like that. Maybe somebody else has a better idea. I can confirm that I'm still running that preinit command on my main server, and it has caused no ill effects to my knowledge, so worst case you can try it without too much worry.
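In case it helps, a sketch of what that preinit approach looks like (run at your own risk; this is from memory, so treat the exact module name as an assumption):

```shell
# Sketch: unload the GEOM multipath kernel module so multipath devices
# aren't created at boot. I set this up as a preinit task in the GUI;
# it can also be run manually for testing.
kldunload geom_multipath.ko    # unload the module
kldstat | grep -i multipath    # no output means the module is no longer loaded
```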
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
So my pool does not show up when I try to import via the GUI. There's nothing listed, nothing in the dropdown, and I can't type anything into the pool name field.

The legacy UI errors out (timeout) when trying to import a volume.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So my pool does not show up when I try to import via the GUI. There's nothing listed, nothing in the dropdown, and I can't type anything into the pool name field.

Did you export it first at the CLI?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
And can you import it again from the CLI? (Or does it at least appear in the list from zpool import?)
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
Yup, it imported successfully, in the same degraded state. But I still don't see it in the GUI.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you import in the CLI, you shouldn't expect to see it in the GUI.

Normally exporting a pool makes it available for the GUI to import and the middleware will do the right things to mount it, etc.

Without that, you are going to need to either set the mountpoint (altroot) or use a manual mount command (zfs mount).

Maybe best to start by looking at zpool get altroot tank and see if it is already mounted there.
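Roughly, those two options would look like this for your pool (a sketch; -R sets the altroot at import time):

```shell
# Option A: hand the pool back to the GUI/middleware.
zpool export tank            # release the pool so the GUI import wizard can see it

# Option B: stay at the CLI, but mount it under /mnt like the middleware would.
zpool import -R /mnt tank    # import with altroot=/mnt
zfs mount -a                 # mount any datasets that didn't auto-mount
zpool get altroot tank       # confirm where the pool is rooted
```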
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
I'll edit the OP as well. My data is there, and it appears I can access it. However, it didn't mount the volume in the normal manner, and the GUI doesn't seem to know what to do with it.

If I look in /, I see /tank, and inside /tank is all my data. I discovered this by doing a zfs list; everything is in there. That's nice, and I'm close, but I'd still like to understand why the GUI can't see it. I'm going to wipe this anyway, since I don't want a 3-vdev pool anymore, and I'll restore from backup, but I'm still enjoying this as a good learning opportunity.
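(For anyone curious, this is the kind of check that showed me where the data landed; a minimal sketch:)

```shell
# Sketch: show where the pool's datasets are actually mounted.
zfs list -r -o name,mountpoint,mounted tank   # here, tank showed up mounted at /tank
```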
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
If you import in the CLI, you shouldn't expect to see it in the GUI.

Normally exporting a pool makes it available for the GUI to import and the middleware will do the right things to mount it, etc.

Without that, you are going to need to either set the mountpoint (altroot) or use a manual mount command (zfs mount).

Maybe best to start by looking at zpool get altroot tank and see if it is already mounted there.

Code:
root@freenas[/tank]# zpool get altroot tank
NAME  PROPERTY  VALUE    SOURCE
tank  altroot   -        default
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Perhaps the missing altroot is enough to make the GUI miss it?

If you use zpool set altroot="/mnt" tank can you get it to update?

Then export in CLI and import in GUI?
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
Perhaps the missing altroot is enough to make the GUI miss it?

If you use zpool set altroot="/mnt" tank can you get it to update?

Then export in CLI and import in GUI?

From an order-of-operations point of view, should I set that before or after exporting and reimporting the pool?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
From an order-of-operations point of view, should I set that before or after exporting and reimporting the pool?
I'm not 100% sure, but I think you can't perform actions on a pool with zpool unless it's already imported, so I think that dictates the order of operations as I specified. I did think it would make more sense to export first if that worked... perhaps give it a try; I'm not near my test system to try it for you right now.
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
I'm not 100% sure, but I think you can't perform actions on a pool with zpool unless it's already imported, so I think that dictates the order of operations as I specified. I did think it would make more sense to export first if that worked... perhaps give it a try; I'm not near my test system to try it for you right now.

well this answers that:

Code:
root@freenas[/]# zpool set altroot="/mnt" tank
cannot set property for 'tank': property 'altroot' can only be set during pool creation or import

I couldn't export the pool; it was locked/in use. I did a reboot, and the pool came back already imported, but I couldn't access it. I then successfully exported it and reimported it using zpool import -R /mnt tank, so it now shows up under /mnt and I can access files on it. The GUI, however, has no idea about it.

I re-exported and tried to import via the GUI again, and it doesn't see any pools available to import (zpool import at the CLI lists the degraded tank pool just fine).
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
SUCCESS!!

I've been alternating between 11.1-U7 and 11.2-U7 (the pool was originally built on 11.1-U7, but I'd wanted to upgrade anyway, and after having no luck on 11.1 I thought I'd try 11.2 to see if it fared any better).

Tonight I went back over to the 11.1 install, had to delete the multipathed disks it had created again, and imported via the GUI. I can see it all, it's mounted in /mnt, and I can manage it via the GUI.

Good stuff. I'll try the disk replacement next. Now that I have it restored, it'll save me some time having to copy off my backup server.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
it'll save me some time having to copy off my backup server.
this is a beautiful statement that doesn't happen often enough in the forums....
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
this is a beautiful statement that doesn't happen often enough in the forums....

Yes... backups are a wonderful thing. I work in IT (though not as a storage guy), so I figured there's no reason to do it differently at home!

I'm still a little confused about HOW all of this happened. Multipathing really seems to have been the major culprit, and I'm not sure I understand why. All my disks still have corrupted GPTs (though only one of the two on each disk), so something definitely happened.

I appreciate everyone's help over the past few months getting this resolved!
 