Reboot and pools offline

uberthoth

Dabbler
Joined
Mar 15, 2022
Messages
11
I recently built a new TrueNAS SCALE server (TrueNAS-SCALE-22.02.2.1) with two disks in a ZFS mirror and one SSD in a stripe. After rebooting, all pools were offline. Clicking 'Import' in the GUI does not work, as it does not see any pools.

Code:
root@truenas[~]# zpool status -v
  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:06 with 0 errors on Fri Jul 22 03:45:08 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdb3      ONLINE       0     0     0

errors: No known data errors


However, the pools are there:

Code:
zpool import -f
   pool: stripe
     id: 7923944345547623130
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        stripe                                  ONLINE
          bfe38e88-f884-4193-ad24-6384ee8b0582  ONLINE

   pool: big14
     id: 17782599154639173931
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        big14                                     ONLINE
          mirror-0                                ONLINE
            35d88e03-0407-4c84-a579-813bd4567316  ONLINE
            b49ea79f-c737-4fde-8156-93e7f3c1739b  ONLINE


Importing them by name individually does work:

Code:
root@truenas[~]# zpool import big14
root@truenas[~]# zpool import stripe
root@truenas[~]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
udev                                                       3.7G     0  3.7G   0% /dev
tmpfs                                                      780M  9.3M  771M   2% /run
boot-pool/ROOT/22.02.2.1                                   229G  2.7G  227G   2% /
tmpfs                                                      3.9G   92K  3.9G   1% /dev/shm
tmpfs                                                      100M     0  100M   0% /run/lock
tmpfs                                                      4.0M     0  4.0M   0% /sys/fs/cgroup
tmpfs                                                      3.9G  5.0M  3.9G   1% /tmp
boot-pool/grub                                             227G  8.3M  227G   1% /boot/grub
big14                                                       13T  2.1T   11T  17% /big14
stripe                                                     1.8T  7.9G  1.8T   1% /stripe
stripe/ix-applications                                     1.8T  128K  1.8T   1% /stripe/ix-applications
stripe/ix-applications/default_volumes                     1.8T  128K  1.8T   1% /stripe/ix-applications/default_volumes
stripe/ix-applications/k3s                                 1.8T  121M  1.8T   1% /stripe/ix-applications/k3s
stripe/ix-applications/releases                            1.8T  128K  1.8T   1% /stripe/ix-applications/releases
stripe/ix-applications/catalogs                            1.8T  5.9M  1.8T   1% /stripe/ix-applications/catalogs
stripe/ix-applications/docker                              1.8T   21M  1.8T   1% /stripe/ix-applications/docker
stripe/ix-applications/releases/pihole                     1.8T  128K  1.8T   1% /stripe/ix-applications/releases/pihole
stripe/ix-applications/releases/pihole/charts              1.8T  256K  1.8T   1% /stripe/ix-applications/releases/pihole/charts
stripe/ix-applications/releases/pihole/volumes             1.8T  128K  1.8T   1% /stripe/ix-applications/releases/pihole/volumes
stripe/ix-applications/releases/pihole/volumes/ix_volumes  1.8T  128K  1.8T   1% /stripe/ix-applications/releases/pihole/volumes/ix_volumes


However, this mounts both at / instead of under /mnt/ as they were previously, so none of the above k3s paths work, all the NFS mounts are wrong, etc.

How do I get zpool to import those to the proper path?

Or better yet, how do I get TrueNAS to mount the pools properly in the first place? And if there is something wrong with the pools that I am missing, how do I identify and fix it?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Export each pool via `zpool export <name of pool>`, and then import them via `zpool import -f -R /mnt <name of pool>`.
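
For your two pools, that should look something like this (a quick sketch; adjust if the names differ):

Code:
zpool export stripe
zpool export big14
zpool import -f -R /mnt stripe
zpool import -f -R /mnt big14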
 

uberthoth

Dabbler
Joined
Mar 15, 2022
Messages
11
Thank you very much, Samuel Tai! The `-R /mnt` is certainly the key to answering my first question, but how do I get it to stick after a reboot? And why do my exports fail here? They succeed after the import, but not before. And the import with -f does not seem to mount the pool, but the import without -f does?

Code:
root@truenas[~]# zpool export stripe
cannot open 'stripe': no such pool
root@truenas[~]# zpool import -f -R /mnt stripe
root@truenas[~]# zpool export big14           
cannot open 'big14': no such pool
root@truenas[~]# zpool import -f -R /mnt big14
root@truenas[~]# zpool export stripe           
root@truenas[~]# zpool export big14           
root@truenas[~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      3.7G     0  3.7G   0% /dev
tmpfs                     780M  9.3M  771M   2% /run
boot-pool/ROOT/22.02.2.1  229G  2.7G  227G   2% /
tmpfs                     3.9G   92K  3.9G   1% /dev/shm
tmpfs                     100M     0  100M   0% /run/lock
tmpfs                     4.0M     0  4.0M   0% /sys/fs/cgroup
tmpfs                     3.9G  5.0M  3.9G   1% /tmp
boot-pool/grub            227G  8.3M  227G   1% /boot/grub
root@truenas[~]# zpool import -R /mnt stripe
root@truenas[~]# zpool import -R /mnt big14
root@truenas[~]# df -h                     
Filesystem                                                 Size  Used Avail Use% Mounted on
udev                                                       3.7G     0  3.7G   0% /dev
tmpfs                                                      780M  9.3M  771M   2% /run
boot-pool/ROOT/22.02.2.1                                   229G  2.7G  227G   2% /
tmpfs                                                      3.9G   92K  3.9G   1% /dev/shm
tmpfs                                                      100M     0  100M   0% /run/lock
tmpfs                                                      4.0M     0  4.0M   0% /sys/fs/cgroup
tmpfs                                                      3.9G  5.0M  3.9G   1% /tmp
boot-pool/grub                                             227G  8.3M  227G   1% /boot/grub
stripe                                                     1.8T  7.9G  1.8T   1% /mnt/stripe
stripe/ix-applications                                     1.8T  128K  1.8T   1% /mnt/stripe/ix-applications
stripe/ix-applications/docker                              1.8T   21M  1.8T   1% /mnt/stripe/ix-applications/docker
stripe/ix-applications/releases                            1.8T  128K  1.8T   1% /mnt/stripe/ix-applications/releases
stripe/ix-applications/catalogs                            1.8T  5.9M  1.8T   1% /mnt/stripe/ix-applications/catalogs
stripe/ix-applications/default_volumes                     1.8T  128K  1.8T   1% /mnt/stripe/ix-applications/default_volumes
stripe/ix-applications/k3s                                 1.8T  121M  1.8T   1% /mnt/stripe/ix-applications/k3s
stripe/ix-applications/releases/pihole                     1.8T  128K  1.8T   1% /mnt/stripe/ix-applications/releases/pihole
stripe/ix-applications/releases/pihole/charts              1.8T  256K  1.8T   1% /mnt/stripe/ix-applications/releases/pihole/charts
stripe/ix-applications/releases/pihole/volumes             1.8T  128K  1.8T   1% /mnt/stripe/ix-applications/releases/pihole/volumes
stripe/ix-applications/releases/pihole/volumes/ix_volumes  1.8T  128K  1.8T   1% /mnt/stripe/ix-applications/releases/pihole/volumes/ix_volumes
big14                                                       13T  2.1T   11T  17% /mnt/big14
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
It's a matter of sequence. If the pool is imported, then it can be exported. I wasn't sure of the state of your pools, so it's always best to try to export them cleanly. If the pool isn't imported, exporting only complains that it can't find the pool. No harm, no foul.
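
If you want to skip the harmless error, you can check first that the pool is actually imported, e.g. (a sketch):

Code:
# only export the pool if it's currently imported
zpool list big14 >/dev/null 2>&1 && zpool export big14
zpool list stripe >/dev/null 2>&1 && zpool export stripe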

As for importing, the -f option forces the import, ignoring any bits left over from an unclean export. In your case, since you've had a couple of cycles of clean exports and imports, you should export from the shell and then import from the web UI. Importing from the web UI should allow your pools to mount at the next boot.
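
Before switching to the web UI, you can confirm the pools show up as exported and importable with a plain scan (this lists them without importing anything):

Code:
# lists exported pools that are available for import
zpool import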
 

uberthoth

Dabbler
Joined
Mar 15, 2022
Messages
11
Exporting the pools from the shell appears to be successful (no output, $? == 0), but the Web UI still shows nothing when I try to import.
[Screenshot of the Web UI import dialog attached]
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Do your pools appear in the pulldown?
 

uberthoth

Dabbler
Joined
Mar 15, 2022
Messages
11
No, they do not.

EDIT: OK, this sequence worked:

  1. Import from the shell (I cannot import in the Web UI, as there is nothing in the dropdown at this point).
  2. Export from the Web UI (important: do NOT delete the data or config, but do check the confirmation; see screenshot below).
  3. Re-import from the Web UI (now the dropdown has my pools).
  4. Reboot from the Web UI (if I reboot from the CLI, the pools will be offline again; repeat from step 1).
  5. Now all pools are up and the k3s application (pihole) is back up and running.
[Screenshot of the Web UI export/disconnect dialog with its confirmation checkbox attached]
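
For anyone following along, after step 5 you can sanity-check that everything landed back under /mnt with something like:

Code:
zpool list
df -h | grep /mnt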
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Try `midclt call system.shutdown`. This should command the middleware to initiate a clean shutdown with automatic pool exports, same as the UI shutdown widget.
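
For example, in a script or scheduled job, something along these lines (a sketch):

Code:
# instead of shutting the OS down directly, e.g.:
#   shutdown -h now
# ask the middleware to do it, so the pools get exported cleanly:
midclt call system.shutdown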
 

mattyv316

Dabbler
Joined
Sep 13, 2021
Messages
27
Thank you @Samuel Tai and @uberthoth for this info. This helped me find my problem.
Using iDRAC, I was waking a TrueNAS SCALE server (Dell R720) to replicate data from my main R730, then shutting it down via the CLI when complete. This was causing my pools to show as offline the next time it was started up. I replaced my shutdown job command with the middleware call for shutdown. So far, it is working. Thanks again.
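
Conceptually, the job now ends with something like this (a rough sketch; replication_done.sh is a placeholder for whatever check your setup uses to tell that replication has finished):

Code:
#!/bin/sh
# wait for the replication to finish (placeholder check), then shut down
# through the TrueNAS middleware so the pools get exported cleanly
until /root/replication_done.sh; do
    sleep 60
done
midclt call system.shutdown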
 

zsw12abc

Dabbler
Joined
Nov 22, 2022
Messages
25
Hey mate,
I followed your 5 steps, but it failed at step 3. I can't import my pool from either the Web UI or the shell; it shows the error: [EZFS_IO] Failed to import 'MasterPool' pool: I/O error.
Do you know how to solve it?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@zsw12abc, what does zpool import show?
 

zsw12abc

Dabbler
Joined
Nov 22, 2022
Messages
25
[Screenshot of zpool import output from the shell attached]

Hey Samuel,
This is the screenshot from my shell.
I have now been running `zpool import -fFX MasterPool` for 8 hours and nothing has happened.
Previously, I ran `zpool import -f MasterPool` and got the I/O error.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I'm not sure your pool is recoverable at this point. From the zpool-import man page:

Code:
     -X      Used with the -F recovery option. Determines whether extreme
             measures to find a valid txg should take place. This allows the
             pool to be rolled back to a txg which is no longer guaranteed to
             be consistent. Pools imported at an inconsistent txg may contain
             uncorrectable checksum errors. For more details about pool
             recovery mode, see the -F option, above. WARNING: This option
             can be extremely hazardous to the health of your pool and should
             only be used as a last resort.

You should've just tried `zpool import -f -F -R /mnt MasterPool`. Unfortunately, you're going to have to let this import attempt run to completion, which may take a couple of days as it searches for a valid TXG. Next time, wait for help before cutting and pasting commands for which you don't know the impact.

The I/O error was probably due to the pool thinking it had been last imported on a different system, and could've been overcome with just a -F.
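
If the pool does come back, or for anyone hitting the same I/O error later, the gentler sequence would be something like:

Code:
# dry run: report what -F recovery would do without actually doing it
zpool import -f -F -n MasterPool
# the real recovery import, mounted under /mnt
zpool import -f -F -R /mnt MasterPool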
 

zsw12abc

Dabbler
Joined
Nov 22, 2022
Messages
25
Thanks Samuel,
The reason I used `-fFX` is from this video.
It has been running for 24 hours and is still going. Sigh.
I will check with you next time, but I hope there won't be a next time.
 

zsw12abc

Dabbler
Joined
Nov 22, 2022
Messages
25
Hey @Samuel Tai, do you know how to check the running time of the `import` command?
My machine has been running it for 2 days.
The total size of the pool is around 6 TB, and there are 2 tasks that have been running for 2 days as well.
[Screenshots of the running tasks attached]
 