Importing pool crashes and server reboots

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
When I import a pool previously used for Apps, the software crashes and the server reboots.

After having problems rebooting my system, I decided to do a fresh install of the same version of TrueNAS SCALE I had previously been using (23.10) and restore from a recently downloaded configuration file.

The restore using the downloaded configuration file failed: the boot procedure got stuck on Starting ix-etc.service ...

Therefore I have now chosen to try to restore the previous configuration manually. I started by importing my pools. Two of the three pools imported without any issues, but importing the third pool causes the software to crash and the server to reboot. The third pool previously held my Apps.

How can I determine what is wrong with the third pool?

As far as I can tell, the crash has not caused any damage to other parts of the installation, and the third pool is still recognized as exported.

This was the first time I tried a fresh reinstall and uploaded a configuration file.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
zfs tends to not do well once a pool starts crashing, largely being designed with the assumption that you have backups.
you can try importing it read only.
something like zpool import -o readonly=on -f
also, a zpool status -v (in code tags) can help us understand the layout
and zpool import can show us the pool with issues.
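putting that together, the rough sequence would be something like this (the pool name is just a placeholder, use whatever zpool import reports):

Code:
# show the layout and health of the pools that are already imported
zpool status -v

# list pools that are visible but not yet imported
zpool import

# then attempt a forced, read-only import of the suspect pool
zpool import -o readonly=on -f <poolname>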
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
Thanks for your response.

My hardware is listed in my signature.

The output from zpool status -v does not include the third pool:

Code:
pool: EntertainmentPool
 state: ONLINE
  scan: scrub repaired 0B in 1 days 08:08:37 with 0 errors on Sun Feb 25 23:08:39 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        EntertainmentPool                         ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            f104358a-de95-452f-b7fc-f15fc562aaa8  ONLINE       0     0     0
            44ffab2d-c584-4258-aa8b-4836779baa61  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            fc2c2b8a-7e45-4d8f-b590-8a7324964ac3  ONLINE       0     0     0
            e15cf595-50ca-42d2-85a9-2024b1985f35  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            c666071c-f1c9-49fd-8964-40cd77ebb90e  ONLINE       0     0     0
            16bede38-2a70-4627-a02f-c044dc8a03d9  ONLINE       0     0     0
        logs
          mirror-3                                ONLINE       0     0     0
            a414ac30-3da1-419d-b224-bf1524612cd3  ONLINE       0     0     0
            58d59abe-2839-40aa-8b8e-9cf9a16be814  ONLINE       0     0     0

errors: No known data errors

  pool: PreciousPool
 state: ONLINE
  scan: scrub repaired 0B in 00:59:09 with 0 errors on Sat Feb 24 15:59:11 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        PreciousPool                              ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            ec105e84-6075-4087-9cec-33ff570f5d47  ONLINE       0     0     0
            45f451cd-f3cd-4433-af31-f8385ae38562  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            158f45c6-b7a8-4ee1-94a7-7537d734d236  ONLINE       0     0     0
            d67efb89-4229-4732-a9b3-bb93842b5765  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            a704cba2-f6f9-404e-a647-35fd9e177b74  ONLINE       0     0     0
            3cb1308f-0e09-4533-b32f-8359d5c283ff  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdg3      ONLINE       0     0     0

errors: No known data errors


The output from zpool import includes information about the third pool:

Code:
pool: HostingPool
     id: 2797520581735915573
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:


        HostingPool                               ONLINE
          indirect-0                              ONLINE
          indirect-1                              ONLINE
          indirect-2                              ONLINE
          indirect-3                              ONLINE
          mirror-4                                ONLINE
            6ce08ecc-3580-4126-a957-dbdd4921dc03  ONLINE
            c81b76c9-1ae7-4265-8096-a66ff57a92e9  ONLINE
          mirror-5                                ONLINE
            dd047bee-d2db-42ba-9e35-24596fae8b7d  ONLINE
            276a0f8f-d3be-4133-a7a3-34bfd8f0fbfa  ONLINE
          mirror-6                                ONLINE
            d5b4df4c-dc46-4a2e-afe5-196ef7f3dc25  ONLINE
            90c07787-aa3c-4fd4-a55e-e2281cefe208  ONLINE
          mirror-7                                ONLINE
            224a82f3-697a-4e30-a16c-c4bd5f756dff  ONLINE
            a49bd2e3-2dc4-4078-9041-f7c4af52bb6c  ONLINE


The mirror members of the pool are NVMe drives mounted on two ASRock Hyper Quad M.2 cards, each seated in a PCI-Express x16 slot configured as 4x4x4x4 bifurcation.

What does 'indirect' in the members of the pool mean?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The "indirect" is almost certainly a Mirror vDev that was removed, making it a virtual vDev, and any data on it, moved to physical Mirror vDevs. Looks like you did 4 vDev removals.

As to why your "HostingPool" is unimportable, I don't know.

You can try the command that @artlessknave suggested;
zpool import -o readonly=on -f HostingPool
But, be prepared for a server reboot if it fails.

There are other import options, like "-F" and "-X". You should view the manual page for zpool-import.
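For example, the manual page describes a dry-run combination that reports whether a "-F" rewind would work, without actually importing the pool;

Code:
# read the documentation for the import options
man zpool-import

# dry run: report whether discarding the last few transactions ("-F")
# would make the pool importable, without actually doing it
zpool import -F -n HostingPool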


Please note that ZFS was specifically designed to survive graceless reboots (i.e. crashes or power loss), multiple times, WITHOUT pool damage. This was a requirement at Sun Microsystems, which developed ZFS, when dealing with huge pools.

Now to be clear, data in flight can be lost, just as with any other file system, but existing data should remain intact. Of course, a hardware fault that occurs during a graceless reboot could still impact existing data.
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
In the System Settings Shell I tried sudo zpool import HostingPool -F but got the output:
Code:
cannot mount '/HostingPool': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets%


The pool was now imported but could not be mounted as shown in the output from sudo zpool status -v:
Code:
NAME                                      STATE     READ WRITE CKSUM
        HostingPool                               ONLINE       0     0     0
          mirror-4                                ONLINE       0     0     0
            6ce08ecc-3580-4126-a957-dbdd4921dc03  ONLINE       0     0     0
            c81b76c9-1ae7-4265-8096-a66ff57a92e9  ONLINE       0     0     0
          mirror-5                                ONLINE       0     0     0
            dd047bee-d2db-42ba-9e35-24596fae8b7d  ONLINE       0     0     0
            276a0f8f-d3be-4133-a7a3-34bfd8f0fbfa  ONLINE       0     0     0
          mirror-6                                ONLINE       0     0     0
            d5b4df4c-dc46-4a2e-afe5-196ef7f3dc25  ONLINE       0     0     0
            90c07787-aa3c-4fd4-a55e-e2281cefe208  ONLINE       0     0     0
          mirror-7                                ONLINE       0     0     0
            224a82f3-697a-4e30-a16c-c4bd5f756dff  ONLINE       0     0     0
            a49bd2e3-2dc4-4078-9041-f7c4af52bb6c  ONLINE       0     0     0


errors: No known data errors


Although imported, the HostingPool was not shown with the other pools in the Storage Dashboard.
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
These lines were lost in the copy-n-paste procedure from the shell and were also included in the output of sudo zpool status -v:

Code:
pool: HostingPool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:39 with 0 errors on Sat Feb 24 15:01:42 2024
remove: Removal of vdev 1 copied 172G in 0h4m, completed on Sun Mar  3 12:17:42 2024
        2.72M memory used for removed device mappings
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
I got the same output/outcome using sudo zpool import HostingPool -o readonly=on, i.e.:
cannot mount '/HostingPool': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets%

The pool was exported prior to trying the import again.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Oh, I forgot an option. It appears that the pool was attempting to mount in the root FS at "/HostingPool", not "/mnt/HostingPool".

Export the pool and re-import;
zpool export HostingPool
zpool import -R /mnt HostingPool

You may or may not need the read-only option again. And you likely don't need the "-F" (capital F) option again; it has already thrown away the problematic recent writes. In fact, re-running with "-F" can be harmful.
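If you want to keep the read-only safeguard while re-importing under /mnt, the combined form would be something like this (with sudo, since you are running it from the System Settings shell);

Code:
sudo zpool export HostingPool
sudo zpool import -o readonly=on -R /mnt HostingPool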

It may even be possible to import the pool from the GUI, which is preferred (after, of course, exporting it from the command line).


One thing that has become clear to me in the last 6 months is that, when troubleshooting a TrueNAS server, command line work may be required. Whether it is as simple as a ZFS import or more complicated troubleshooting, learning about Unix & ZFS is helpful in the long run for any TrueNAS Admin.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Not sure, as I don't use that feature often enough to have it memorized.

But, if the pool is able to import now, then perhaps the GUI import would be best.
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
Thanks for spotting the missing '/mnt', but unfortunately it didn't solve my problem.

I tried:
1. import the pool HostingPool via the GUI
2. zpool import HostingPool -R /mnt
3. zpool import HostingPool -R /mnt/HostingPool
... but all three attempts caused the server to crash and reboot.

So my conclusion is that the pool HostingPool is importable but not fit to be mounted.

I have some Linux (Ubuntu/Debian) experience gathered prior to going for TrueNAS SCALE, but no previous ZFS experience.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The pool imported read only, but failed to mount the datasets because I missed the alternate mount prefix.

Try one of these;
zpool import -o readonly=on HostingPool -R /mnt
zpool import -o readonly=on HostingPool -R /mnt/HostingPool

This may allow you read-only access to your data, and it may be possible to copy the data off.

ZFS is normally quite robust, and odd problems with importing pools are rare. Generally something like hardware RAID, or possibly a bit flipped in RAM due to lack of ECC, could be the cause. Neither seems to be your case, based on your hardware specifications.
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
Success.
In the System Settings shell I was able to import and mount the HostingPool using sudo zpool import -o readonly=on HostingPool -R /mnt
Now it was possible to browse the content (directories and files) of the datasets in the pool.

In the GUI the HostingPool did not appear in the Storage Dashboard but was shown in the Datasets.

I assume the pool is now mounted as a readonly pool and I cannot resume using it for my apps' config and data yet. Correct?
Is there a way to make the pool writeable again or do I have to copy the content of the pool elsewhere and then rebuild the pool from scratch?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, it appears your Pool's Datasets are mounted, and mounted as Read Only.

Your current option is to copy all the data off to an alternate location, re-create your pool, and then copy your data back.

Something in the pool is corrupt, and I can't figure out what it is remotely. It is possible that during the copy you may run into the corruption, so monitor the pool's status with;
zpool status -v HostingPool
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
I assume that running sudo cp -r * <destination path> is sufficient to copy the directories and files in each dataset (ix-applications and others) to a different dataset in a different pool, but what about copying zvols?
One of the datasets contains a zvol.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Copying zVols depends on what you want to do with the data. They are simply containers for bytes. So a "dd" copy would work, if you had a destination in mind.
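For example, something like this would do a raw block copy; the zVol paths here are just placeholders, and the destination zVol must already exist and be at least as large as the source;

Code:
# raw copy of a zVol's block device to a new zVol on another pool
# (pool/dataset names below are placeholders)
dd if=/dev/zvol/HostingPool/somedataset/somezvol \
   of=/dev/zvol/PreciousPool/backup/somezvol \
   bs=1M status=progress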
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
to a different dataset in a different pool, but what about copying zvols?
One of the datasets contains a zvol.
Another pool on your TrueNAS server?

zfs send source/path/to/zvol | zfs receive destination/path/to/zvol
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
note: i would use rsync. resumability means that if your pool crashes while reading, you can just run the command again if/when you get it mounted again.
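something along these lines, the destination being just an example path on one of your healthy pools:

Code:
# archive mode, preserve hard links/ACLs/xattrs, show overall progress;
# safe to re-run after an interruption (destination path is just an example)
rsync -aHAX --info=progress2 /mnt/HostingPool/ /mnt/PreciousPool/HostingBackup/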
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
I found a different thread on zvol cloning:
https://www.truenas.com/community/threads/zvol-cloning.61350/
... and in that thread they stated that a snapshot was required to use send and receive:
Code:
# zfs snapshot pool1/zvol@tobecloned
# zfs send pool1/zvol@tobecloned | zfs recv pool2/zvol

Can I create a snapshot if the pool is imported as readonly?

I thought rsync was insufficient for copying zvols, since they are not visible among the directories and files of a dataset.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
... and in that thread they stated that a snapshot was required to use send and receive:
For zvols the snapshot is automatically created, in this case --head--

Code:
admin@truenas[~]$ sudo zfs send mars/deimos/ubuntu-nginx | sudo zfs receive -v neptune/test-dataset/ubuntu-new
receiving full stream of mars/deimos/ubuntu-nginx@--head-- into neptune/test-dataset/ubuntu-new@--head--
received 5.86G stream in 17.45 seconds (344M/sec)


Code:
admin@truenas[~]$ sudo zfs list -t snapshot neptune/test-dataset/ubuntu-new
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
neptune/test-dataset/ubuntu-new@--head--     0B      -  3.71G  -

Code:
admin@truenas[~]$ sudo zfs list -t snapshot mars/deimos/ubuntu-nginx
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
mars/deimos/ubuntu-nginx@auto-2024-01-23_06-00   230M      -  3.54G  -
mars/deimos/ubuntu-nginx@auto-2024-01-30_06-00   132M      -  3.54G  -
mars/deimos/ubuntu-nginx@auto-2024-02-02_06-00   131M      -  3.54G  -

and it appears that, at least, the --head-- snapshot does not persist on the source system. You would need to test whether the readonly flag interferes here.
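A quick way to test it, with placeholder dataset names (the destination pool must be writable and the parent dataset must already exist):

Code:
# placeholder dataset paths; send the zvol's current state from the
# read-only pool to a writable pool and watch for a readonly-related error
sudo zfs send HostingPool/path/to/zvol | sudo zfs receive -v PreciousPool/backup/zvol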
 