Boot disk seems to be thrown out of the ZFS pool

tnjona

Cadet
Joined
Mar 7, 2023
Messages
8
Hello,
one of my boot disks seems to be thrown out of the ZFS pool. The following alarms are displayed in TrueNAS. In the storage dashboard, the disk shows as Unassigned, but there is no option to add it to the boot pool.

Just for information, I am virtualizing TrueNAS on Proxmox. I have two volumes from different SSDs (for redundancy) passed through to TrueNAS.


Code:
Boot pool status is DEGRADED: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.


Code:
New ZFS version or feature flags are available for pool 'boot-pool'. Upgrading pools is a one-time process that can prevent rolling the system back to an earlier TrueNAS version. It is recommended to read the TrueNAS release notes and confirm you need the new ZFS feature flags before upgrading a pool.



zpool status outputs the following:

Code:
root@pTrueNAS[~]# zpool status
  pool: boot-pool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:00:18 with 0 errors on Tue Mar  7 03:45:19 2023
config:

        NAME                      STATE     READ WRITE CKSUM
        boot-pool                 DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            13932497275889098375  UNAVAIL      0     0     0  was /dev/sde
            sdd3                  ONLINE       0     0     0

errors: No known data errors



How can I re-add the disk to the boot pool?
I would be happy if someone could help me ...
 
Joined
Jun 15, 2022
Messages
674
Check your cabling. Pull the drive and check it in another system and try to narrow down if it's the drive, cable, drive controller, bad power connection, etc.

Sometimes drives get randomly dropped if running a RAID card in JBOD mode instead of running IT firmware.
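
If the hardware checks out, you could also try re-adding the member to the mirror from the TrueNAS shell. A rough sketch based on the zpool status output above (the GUID and device names are from that output; adjust them to whatever your system actually shows):

```shell
# Check the pool and note the GUID of the UNAVAIL member
# (here 13932497275889098375, which was /dev/sde)
zpool status boot-pool

# If the device reappeared and was only detached, try bringing it online first
zpool online boot-pool 13932497275889098375

# If the label is really gone, replace the member in place.
# Note: the boot pool uses a partition (the surviving member is sdd3),
# so a fresh disk may need a matching partition table before replacing.
zpool replace boot-pool 13932497275889098375 /dev/sde
```

After a replace, the pool resilvers automatically; `zpool status` shows the progress.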
 

tnjona

Cadet
Joined
Mar 7, 2023
Messages
8
Check your cabling. Pull the drive and check it in another system and try to narrow down if it's the drive, cable, drive controller, bad power connection, etc.

Sometimes drives get randomly dropped if running a RAID card in JBOD mode instead of running IT firmware.
Thanks for the answer, but the problem is with the boot pool. The disks are virtual disks passed through by Proxmox, and the underlying SSDs run without problems.
 

tnjona

Cadet
Joined
Mar 7, 2023
Messages
8
What confused me is that after restoring the VM from a backup, the problem reappears after some time, even though the backup was made when everything was working.

In case someone is reading about it here:
I solved the problem by moving the virtual hard disk to another SSD. The only explanation I have is that one SSD was attached to the CPU and the other to the motherboard chipset. Perhaps there was some disagreement between the two. With both VM disks on SSDs attached to the chipset, the system runs.
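
For anyone in the same situation: after moving the virtual disk, it is worth confirming that the mirror has actually recovered rather than just booting. A quick check (pool name as in the output above):

```shell
# Confirm both mirror members show ONLINE and the pool is no longer DEGRADED
zpool status boot-pool

# Optionally start a scrub to verify the data on both members
zpool scrub boot-pool
```

If the old member still shows as UNAVAIL by its GUID, it needs a `zpool replace` against the new device before the mirror is redundant again.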

I still think the problem must be with TrueNAS SCALE, as I have/had the same config on Core and on an OPNsense firewall (both FreeBSD-based) without issues.
 