TrueNAS 12.0-U6.1 Pool offline

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Well, bummer. The pool is still offline :(. So I am guessing I've hit the end of the road for this.

Should I now just disconnect it from the GUI and start over?

I guess the snapshot is worthless?

Also, I have been running three FreeNAS servers for years and never had this happen. I have had many drives (or other pieces of hardware) fail; the pool would degrade, I would replace the drive immediately, it would resilver, and I'd be good to go.
Do you have any advice on how to avoid a repeat of this? There were no warning signs; it just went poof.

Thanks again for all the input and help.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Actually - run zpool status -v and post the output in code tags, please.
Also run zpool import and post that in code tags, please.
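
For reference, roughly what each command reports (run them as root from the shell):
Code:
zpool status -v    # health and layout of every pool that is currently imported,
                   # with verbose per-file detail if there are data errors
zpool import       # scans the attached disks for pools that are visible
                   # but not currently imported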
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Code:
root@truenas[~]# zpool status -v
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:45 with 0 errors on Tue Apr 12 03:46:45 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors


and

Code:
root@truenas[~]# zpool import
   pool: boot-pool
     id: 13527221736826440511
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        boot-pool   ONLINE
          nvd0p2    ONLINE

   pool: MainNAS
     id: 3498190718808128795
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        MainNAS                                         ONLINE
          raidz2-0                                      ONLINE
            gptid/156864e2-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/15d02660-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/15bb2646-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/15e4ae7c-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/15f8f663-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/16430264-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/162834a9-832a-11ec-8e9c-a85e4550970d  ONLINE
            gptid/16837624-832a-11ec-8e9c-a85e4550970d  ONLINE
root@truenas[~]#

 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Try zpool import MainNAS again - and watch carefully for any messages
Then try zpool import -a
If that doesn't work, try zpool import -f MainNAS
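
The sequence above as a single sketch - stop at the first command that errors and note the exact message:
Code:
zpool import MainNAS       # plain import by name; watch for any error text
zpool import -a            # try to import every pool the system can find
zpool import -f MainNAS    # force the import if ZFS claims the pool is still
                           # in use by another system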

Beyond that - I am stumped
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Here is what it said:
Code:
root@truenas[~]# zpool import -a
cannot import 'boot-pool': pool was previously in use from another system.
Last accessed by <unknown> (hostid=0) at Mon Apr 11 22:09:41 2022
The pool can be imported, use 'zpool import -f' to import the pool.
root@truenas[~]# zpool import -f MainNAS
cannot import 'MainNAS': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
root@truenas[~]# zpool import MainNAS
root@truenas[~]# zpool import -a
cannot import 'boot-pool': pool was previously in use from another system.
Last accessed by <unknown> (hostid=0) at Mon Apr 11 22:09:41 2022
The pool can be imported, use 'zpool import -f' to import the pool.
root@truenas[~]# zpool import -f MainNAS
cannot import 'MainNAS': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name



Sigh. I do appreciate all the effort you've put into this.

Do you think I should try to give it a new name?
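
For reference, the rename-on-import form that the error message suggests would look roughly like this (the id is the one zpool import listed for MainNAS; the new name is just an example):
Code:
# import the pool by its numeric id under a different name
zpool import 3498190718808128795 MainNAS_new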
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
It says that a pool called MainNAS already exists.
Are you sure the pool isn't already imported somewhere?

You could try zpool export -f MainNAS and then import it again with zpool import -f MainNAS
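
A sketch of that sequence, assuming a half-imported MainNAS is what is holding on to the name:
Code:
zpool export -f MainNAS    # force-export whatever the system currently holds as MainNAS
zpool import -f MainNAS    # then force-import it again from the disks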
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Unfortunately that didn't help.

Is there somewhere I might not be looking?

Also, I am 99.9% positive the answer is no, but do you think upgrading to the newest version (U8) might help?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
At this point I don't see that it could do much harm.
Assuming the share and permissions config is simple, install to a USB flash drive, just import the pool, and see if it turns up.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Everything looks as though it should be working - except it isn't. Rebuilding the NAS to a flash drive (on a temporary basis, making sure it's a good drive) and removing your existing boot drive would seem a sensible Hail Mary move.
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
So when booting the NAS, I saw it report this:

vdev tree has 1 missing top level vdevs.
current settings allow a maximum of 0 missing top-level vdevs at this

I know that can't be good :(
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Are we back to your disks again?
Have you removed a disk/vdev at some point? An sVDEV, SLOG or similar?
Your old boot pool says da0, but you tell me you boot from NVMe - something isn't adding up (and it may be irrelevant).
[Of course, at this point you may be booting from something else entirely.]

However, we do have a meaningful error - your pool is apparently (and allegedly) missing a vdev - so what vdev have you removed? [And I am fully expecting "I haven't removed any vdev".]

I think I have run out of ideas.

I can't remember - do you have a backup?

You MIGHT still be able to import in RO mode: "zpool import -o readonly=on MainNAS" - this is the last idea I have at this point.
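
Spelled out, with a follow-up check if it succeeds (read-only means nothing gets written, so data could at least be copied off):
Code:
zpool import -o readonly=on MainNAS    # last-resort read-only import
zpool status -v MainNAS                # if it imports, see what state and errors it reports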
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
vdev tree has 1 missing top level vdevs.
current settings allow a maximum of 0 missing top-level vdevs at this
Yeah, this is a potential mess if you've accidentally added a single device as a top-level vdev.

zdb -C -U /data/zfs/zpool.cache

and

zpool history

please.
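
Roughly what those two commands show:
Code:
# dump the pool configuration(s) recorded in the middleware's cache file -
# this is what the system last believed the vdev layout to be
zdb -C -U /data/zfs/zpool.cache

# print the administrative history (create/add/remove/import/export) stored
# in every currently-imported pool
zpool history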
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Yeah, this is a potential mess if you've accidentally added a single device as a top-level vdev.

zdb -C -U /data/zfs/zpool.cache

and

zpool history

please.
Here is the zdb output:
Code:
root@truenas[~]# zdb -C -U /data/zfs/zpool.cache
TrueServer:
    version: 5000
    name: 'TrueServer'
    state: 0
    txg: 163488
    pool_guid: 1603798630106216270
    errata: 0
    hostid: 2601446544
    hostname: ''
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 1603798630106216270
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 8614971072785792033
            nparity: 2
            metaslab_array: 256
            metaslab_shift: 34
            ashift: 12
            asize: 95983889285120
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 129
            children[0]:
                type: 'disk'
                id: 0
                guid: 16920431341407029552
                path: '/dev/gptid/b0304944-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 130
            children[1]:
                type: 'disk'
                id: 1
                guid: 17460668793511436470
                path: '/dev/gptid/b06d275a-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 131
            children[2]:
                type: 'disk'
                id: 2
                guid: 8033773144212940751
                path: '/dev/gptid/b05b8091-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 132
            children[3]:
                type: 'disk'
                id: 3
                guid: 18169469800793595977
                path: '/dev/gptid/b0adc118-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 133
            children[4]:
                type: 'disk'
                id: 4
                guid: 7341022465625016651
                path: '/dev/gptid/b143ac85-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 134
            children[5]:
                type: 'disk'
                id: 5
                guid: 468484581845318682
                path: '/dev/gptid/b1698055-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 135
            children[6]:
                type: 'disk'
                id: 6
                guid: 8679953526941588111
                path: '/dev/gptid/b18125bc-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 136
            children[7]:
                type: 'disk'
                id: 7
                guid: 18445334097904060286
                path: '/dev/gptid/b18d7d07-517d-11ec-8d3f-a85e4550970d'
                create_txg: 4
                com.delphix:vdev_zap_leaf: 137
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
root@truenas[~]#


Here is the zpool history:
Code:

root@truenas[~]# zpool history
History for 'boot-pool':
2021-11-29.17:47:46 zpool create -f -o cachefile=/tmp/zpool.cache -O mountpoint=none -O atime=off -O canmount=off boot-pool da0p2
2021-11-29.17:47:46 zfs set compression=on boot-pool
2021-11-29.17:47:47 zfs create -o canmount=off boot-pool/ROOT
2021-11-29.17:47:52 zfs create -o mountpoint=legacy boot-pool/ROOT/default
2021-11-29.17:57:08 zpool set bootfs=boot-pool/ROOT/default boot-pool
2021-11-29.18:35:18 zfs set beadm:nickname=default boot-pool/ROOT/default
2021-11-29.18:35:18 zfs snapshot -r boot-pool/ROOT/default@2021-11-30-00:35:17
2021-11-29.18:35:18 zfs clone -o canmount=off -o mountpoint=legacy boot-pool/ROOT/default@2021-11-30-00:35:17 boot-pool/ROOT/Initial-Install
2021-11-29.18:35:24 zfs set beadm:keep=True boot-pool/ROOT/Initial-Install
2021-11-29.18:35:39  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-11-29.18:35:40  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-11-29.18:35:40  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-11-29.18:35:40  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.18:35:40  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.18:35:41  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.18:35:41  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-11-29.18:35:42  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-11-29.18:46:07 zfs destroy -r boot-pool/.system
2021-11-29.19:33:07  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-11-29.19:33:07  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-11-29.19:33:07  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-11-29.19:33:08  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.19:33:08  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.19:33:08  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-c8c2ac65efb5423aa839beddcf09baf6
2021-11-29.19:33:08  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-11-29.19:33:16  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-11-29.19:33:25  zfs snapshot  boot-pool/.system/samba4@wbc-1638236000
2021-11-29.19:35:00 zfs destroy -r boot-pool/.system
2021-11-30.16:04:07 zfs set org.freebsd.ioc:active=no boot-pool
2021-12-06.05:45:02  zpool scrub boot-pool
2021-12-09.12:02:33 zfs set beadm:nickname=Initial-Install boot-pool/ROOT/Initial-Install
2021-12-09.12:02:34 zfs snapshot -r boot-pool/ROOT/default@2021-12-09-12:02:33
2021-12-09.12:02:40 zfs clone -o canmount=off -o mountpoint=legacy boot-pool/ROOT/default@2021-12-09-12:02:33 boot-pool/ROOT/12.0-U7
2021-12-09.12:02:40 zfs set beadm:nickname=12.0-U7 boot-pool/ROOT/12.0-U7
2021-12-09.12:02:41  zfs set beadm:keep=False boot-pool/ROOT/12.0-U7
2021-12-09.12:02:42  zfs set sync=disabled boot-pool/ROOT/12.0-U7
2021-12-09.12:13:20  zfs inherit  boot-pool/ROOT/12.0-U7
2021-12-09.12:13:20 zfs set canmount=noauto boot-pool/ROOT/12.0-U7
2021-12-09.12:13:20 zfs set mountpoint=/tmp/BE-12.0-U7.KxQsKE2n boot-pool/ROOT/12.0-U7
2021-12-09.12:13:21 zfs set mountpoint=/ boot-pool/ROOT/12.0-U7
2021-12-09.12:13:22 zpool set bootfs=boot-pool/ROOT/12.0-U7 boot-pool
2021-12-09.12:13:22 zfs set canmount=noauto boot-pool/ROOT/Initial-Install
2021-12-09.12:13:22 zfs set canmount=noauto boot-pool/ROOT/default
2021-12-09.12:13:27 zfs promote boot-pool/ROOT/12.0-U7
2021-12-13.03:45:04  zpool scrub boot-pool
2021-12-20.03:45:04  zpool scrub boot-pool
2021-12-27.03:45:04  zpool scrub boot-pool
2022-01-03.03:45:04  zpool scrub boot-pool
2022-01-10.03:45:04  zpool scrub boot-pool
2022-01-17.03:45:04  zpool scrub boot-pool
2022-01-24.03:45:07  zpool scrub boot-pool
2022-01-31.03:45:04  zpool scrub boot-pool
2022-04-12.03:45:10  zpool scrub boot-pool

 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
Are we back to your disks again?
Have you removed a disk/vdev at some point? An sVDEV, SLOG or similar?
Your old boot pool says da0, but you tell me you boot from NVMe - something isn't adding up (and it may be irrelevant).
[Of course, at this point you may be booting from something else entirely.]

However, we do have a meaningful error - your pool is apparently (and allegedly) missing a vdev - so what vdev have you removed? [And I am fully expecting "I haven't removed any vdev".]

I think I have run out of ideas.

I can't remember - do you have a backup?

You MIGHT still be able to import in RO mode: "zpool import -o readonly=on MainNAS" - this is the last idea I have at this point.
I did not....
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
The zpool history won't be much use unless @misterpele boots his old OS, I suspect.
 
Joined
Oct 22, 2019
Messages
3,641
When you say "it" shows offline, what is the "it" you are referring to?

Command-line output or the Dashboard GUI?

Have you rebooted at any point during these steps?

If not, what happens if you close the web browser, then SSH into your server and run:
sudo service middlewared restart

(No need for sudo if you are logged in as root.)
 

misterpele

Dabbler
Joined
Dec 4, 2014
Messages
36
When you say "it" shows offline, what is the "it" you are referring to?

Command-line output or the Dashboard GUI?

Have you rebooted at any point during these steps?

If not, what happens if you close the web browser, then SSH into your server and run:
sudo service middlewared restart

(No need for sudo if you are logged in as root.)
The IT I'm referring to is my Pool.

I am using PuTTY, so I can use the command line, but I can also easily access the GUI.

As for rebooting: yes, but not while any of these steps were in progress.
Code:
Stopping middlewared.
Waiting for PIDS: 528, 528.
 
Joined
Oct 22, 2019
Messages
3,641
The IT I'm referring to is my Pool.
I know this is about your pool. But you keep saying that "it" shows the pool as offline.

That's why I asked:

Command-line output or the Dashboard GUI?

Are you determining the pool to be offline because of what the GUI / Dashboard is displaying?

Your disks are all online. Apparently, the manual import (via the command-line) was successful; even subsequent attempts say that such a pool already exists.

Post #19 in this thread implies it was a success.

I ran that and it seemed to run well. I hit esc by accident so can't copy and paste what it said. but in the GUI in snapshots it now shows MainNAS.
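
If the GUI now lists MainNAS under snapshots, a quick command-line check should confirm whether the pool really is imported and healthy; just read the output:
Code:
zpool list MainNAS         # shows size/allocation if the pool is imported
zpool status -v MainNAS    # shows the vdev tree and any errors for this pool
zfs list -r -d 1 MainNAS   # confirms the top-level datasets are visible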
 