Pool missing after upgrading to 11.1-U5


mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
Hi, I have the same problem [mod note: this thread was split off from here] every time I upgrade. I am running a standard iXsystems FreeNAS Mini box where everything came from them; nothing has been changed on the hardware side.

This is what I get after trying to upgrade to 11.1-U5 just now:

Code:
camcontrol devlist
<2.5" SATA SSD 3MG2-P M150821> at scbus0 target 0 lun 0 (pass0,ada0)
<2.5" SATA SSD 3MG2-P M150821> at scbus1 target 0 lun 0 (pass1,ada1)
<16GB SATA Flash Drive SFDK002A> at scbus2 target 0 lun 0 (pass2,ada2)
<Marvell Console 1.01> at scbus7 target 0 lun 0 (pass3)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus8 target 0 lun 0 (pass4,ada3)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus9 target 0 lun 0 (pass5,ada4)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus10 target 0 lun 0 (pass6,ada5)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus11 target 0 lun 0 (pass7,ada6)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus12 target 0 lun 0 (pass8,ada7)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus13 target 0 lun 0 (pass9,ada8)

Code:
zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:56 with 0 errors on Mon May 28 03:45:56 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada2p2      ONLINE       0     0     0

errors: No known data errors

Code:
zdb -e -C ttttt
zdb: can't open 'ttttt': No such file or directory


 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Why are you trying to look at 'ttttt'???

mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
Why are you trying to look at 'ttttt'???

I guess because I’m a FreeNAS/FreeBSD noob just trying to replicate the things mentioned earlier in the conversation...?

I also found many of these in /var/log/messages:
Code:
May 31 22:15:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:16:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:17:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:18:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:19:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:20:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:21:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:22:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:23:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #1
May 31 22:24:08 nas /autosnap.py: [tools.autosnap:259] Volume main not imported, skipping snapshot task #




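Those log lines identify the missing pool as "main", so I guess the zdb command from the earlier thread would take that name rather than a placeholder. A sketch on my part, assuming the pool really is called main:
Code:
# Show the on-disk configuration of the exported / not-imported pool "main".
# "main" comes from the autosnap log above; adjust if the pool name differs.
zdb -e -C main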
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Did you try zpool import at the command line?
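Something like this from a shell, for example. Just a sketch; I'm assuming the pool is named main based on your autosnap log:
Code:
# List pools that are visible on disk but not currently imported
zpool import

# Try importing the pool by name; FreeNAS normally mounts pools under /mnt
zpool import -R /mnt main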
 

mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
I’m counting 8 physical drives. However, I don’t remember the exact partitioning layout...
Code:
gpart status
Name  Status  Components
ada0p1      OK  ada0
ada1p1      OK  ada1
ada2p1      OK  ada2
ada2p2      OK  ada2
ada3p1      OK  ada3
ada3p2      OK  ada3
ada4p1      OK  ada4
ada4p2      OK  ada4
ada5p1      OK  ada5
ada5p2      OK  ada5
ada6p1      OK  ada6
ada6p2      OK  ada6
ada7p1      OK  ada7
ada7p2      OK  ada7
ada8p1      OK  ada8
ada8p2      OK  ada8
$ 



 

mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
Did you try zpool import at the command line?
Yes, I did try that and got:
Code:
 sudo zpool import
   pool: main
     id: 9738258876455951262
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

        main                                            UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  insufficient replicas
            9115302506783959767                         UNAVAIL  cannot open
            16565963235939024949                        UNAVAIL  cannot open
            gptid/241a3b9d-e344-11e7-8a30-d05099c375f2  ONLINE
            gptid/251aef0e-e344-11e7-8a30-d05099c375f2  ONLINE
          raidz1-1                                      ONLINE
            gptid/26365ca5-e344-11e7-8a30-d05099c375f2  ONLINE
            gptid/273576d8-e344-11e7-8a30-d05099c375f2  ONLINE
            gptid/284038af-e344-11e7-8a30-d05099c375f2  ONLINE
            gptid/294d4f36-e344-11e7-8a30-d05099c375f2  ONLINE
        logs
          gptid/bce0a0ce-e344-11e7-8a30-d05099c375f2    ONLINE
$ 


Which does not make a lot of sense. All the drives are there, and nothing has been touched between 11.1-U4 and 11.1-U5. Surely upgrading cannot make two drives physically disconnect from the server?


 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I’m counting 8 physical drives.
Your camcontrol devlist shows 6 hard drives and 2 SSDs, but I wasn't asking about what the OS sees; I was asking about what is physically attached to the computer.
The usual reason a pool fails to import is that too many member disks are unavailable. I am trying to compare the physical disks to the disks the OS is detecting.
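For example, running these from the console should show how the gptid labels in your zpool import output map to the adaX devices. Just a sketch, nothing specific to your box:
Code:
# Disks the OS currently detects
camcontrol devlist

# Map the gptid/... labels from zpool import to adaXpY partitions
glabel status

# Partition layout of every detected disk
gpart show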
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Which does not make a lot of sense. All the drives are there, and nothing has been touched between 11.1-U4 and 11.1-U5. Surely upgrading cannot make two drives physically disconnect from the server?
The server reboots when you upgrade, and a reboot is when drives usually fail.
Your only hope at this point is to shut down the NAS, power it back up, and hope the missing drives (at least one of them) are detected so the pool can be brought back into operation.
Right now, you have a RAIDz1 vdev with two missing members. That is a dead pool if the drives don't magically come back on their own.
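If the drives do show up again after a power cycle, it would be worth checking their health before trusting them. A rough sketch; ada3 is just an example device name, substitute your own:
Code:
# Confirm the drives are detected again
camcontrol devlist

# Check SMART health of a returned drive (repeat for each disk)
smartctl -a /dev/ada3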
 

mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
The server reboots when you upgrade, and a reboot is when drives usually fail.
Your only hope at this point is to shut down the NAS, power it back up, and hope the missing drives (at least one of them) are detected so the pool can be brought back into operation.
Right now, you have a RAIDz1 vdev with two missing members. That is a dead pool if the drives don't magically come back on their own.
The strange thing is that this has happened before when I upgraded, and the drives have always come back if I just leave the server running for a while... again, it does not make a lot of sense...


 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Happy news if they just come back. Did they come back yet?

 

mtaksrud

Cadet
Joined
May 31, 2018
Messages
7
Happy news if they just come back. Did they come back yet?

Yes! Right after my last message :)
I turned the server off, pulled out and reconnected all the drives, and when I restarted everything was fine.
There must have been a bad connection somewhere that was not detected until I rebooted... but it is still a bit strange, because the server had been off for a while, I started it today on 11.1-U4 and everything was fine, and as soon as I upgraded to 11.1-U5 I got this drive error. Anyway, everything seems to be working now... for some reason :)
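After the missing members return, it is probably worth verifying the pool and letting a scrub run. A sketch, assuming the pool is still named main:
Code:
# Confirm every vdev member is ONLINE and check for any resilver activity
zpool status -v main

# Read and checksum every block to catch damage from the time the drives were missing
zpool scrub main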
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That almost sounds like someone set the drives for staggered spinup, but something went awry.
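If it happens again after a reboot, the kernel boot messages should show whether those drives attached late or not at all. A sketch:
Code:
# Look for drive attach messages from the most recent boot
dmesg | grep -i ada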
 