All pools FAULTED after crash/hard reboot

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
Hi!

I'm running into an issue that started just before Christmas.
tl;dr: GRUB errors after a hard reboot, then after a reinstall all pools show up as faulted.

Hardware:
CPU: Intel Core i3-12100
Motherboard: ASRock H670M-ITX/ax
Network cards (built in)
- 1GbE - mgmt - Intel I219-V
- 2.5GbE - VMs and shares - Dragon RTL8125BG
RAM: Corsair Vengeance LPX DDR4 2666MHz 16GB
PSU: Seasonic G12 GM semi modular 550W PSU

Drives:
1x Kingston NV2 NVMe SSD 250GB (boot drive)
1x Samsung 980 PCIe 3.0 NVMe M.2 SSD 1TB
2x Seagate BarraCuda 3.5" HDD 4TB
1x WD Ultrastar 3.5" HDD 12TB

Software:
TrueNAS SCALE 23.10.1 (the freshly reinstalled one).
(I'm not sure what version was running before the crash.)

Setup:
Boot from the small NVMe drive
VMs and other things that want fast storage on the second NVMe drive
Mirrored pool for anything that needed some form of redundancy
Large single drive for everything else

Short intro:
I set the system up this summer/fall with CORE first, then moved over to SCALE pretty early (as a Linux user I felt a lot more at home with SCALE than CORE). I recently removed a single older drive that had started to report errors. Everything on the dying drive was moved over to a new 12TB drive, and the old drive was unplugged on Dec 13.

Then on Dec 20 I noticed that my SMB shares were not accessible and I could not reach the host via any service (web, SSH or console). However, a couple of VMs, and the services on them, were still running fine. I hard rebooted the system, and it gave me GRUB errors on reboot.

I searched far and wide for similar issues, and most advice is to reinstall first and then import the config. Since this was not part of any planned upgrade I have no recent offline config backup, and I was dreading the result. I was hoping I could restore a newer config from one of the drives.
Eventually I got around to reinstalling on the boot drive/pool, and now I'm met with the problem of not being able to import my pools. They ALL appear to be faulted.
No older config has been imported; I've only set the admin password and booted the system.

Side note:
I had set up remote syslog and cannot find anything in the logs explaining why the system crashed. Judging from the last entry logged, it was running in this broken state from ~03:00 until around 20:00, when I hard rebooted it.
I also noticed in the remote syslog files that the Kingston boot drive/pool had been giving errors. For some reason this was not visible in the Web UI, and my email notifications never warned me.


Errors when trying to import pools in the Web UI:
Code:
[EZFS_BADDEV] Failed to import 'SingleDrive12TB' pool: cannot import 'SingleDrive12TB' as 'SingleDrive12TB': one or more devices is currently unavailable


From CLI:
Code:
admin@truenas[~]$ sudo zpool import
   pool: M2_fast01
     id: 8310517784131981017
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        M2_fast01                               FAULTED  corrupted data
          f8cf9043-3f34-48a5-9dda-6276e1b8983d  ONLINE

   pool: MirrorPool
     id: 8329036457983803930
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        MirrorPool                                FAULTED  corrupted data
          mirror-0                                ONLINE
            8b4e2b1c-401b-11ed-9a25-6045cb84d322  ONLINE
            8b461bb4-401b-11ed-9a25-6045cb84d322  ONLINE

   pool: SingleDrive12TB
     id: 7967727370140335695
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        SingleDrive12TB                         FAULTED  corrupted data
          706a807e-1296-410f-8907-8f800673ae8c  ONLINE

Notice that all of the pools, across different drives and technologies, show "FAULTED  corrupted data".

Trying to import with -f:
Code:
admin@truenas[~]$ sudo zpool import -f SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable

admin@truenas[~]$ sudo zpool import -F SingleDrive12TB
cannot import 'SingleDrive12TB': pool was previously in use from another system.
Last accessed by lagring (hostid=39353865) at Wed Dec 31 16:00:00 1969
The pool can be imported, use 'zpool import -f' to import the pool.
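
(If I understand the hostid message correctly, the "previously in use from another system" part is expected after a reinstall, since the fresh install gets a different hostid than the old "lagring" system did. The current one can be checked with:)
Code:
hostid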


smartctl same drive (sda):
Code:
admin@truenas[~]$ sudo smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.63-production+truenas] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     HGST Ultrastar DC HC520 (He12)
Device Model:     HGST HUH721212ALE604
Serial Number:    D7GSKR8N
LU WWN Device Id: 5 000cca 2dfcab70c
Firmware Version: LEGNW9U0
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5319
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Dec 27 12:43:31 2023 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (   87) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (1194) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   133   133   054    Pre-fail  Offline      -       92
  3 Spin_Up_Time            0x0007   202   202   024    Pre-fail  Always       -       292 (Average 361)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       27
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   128   128   020    Pre-fail  Offline      -       18
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       2621
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       27
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       6553700
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       135
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       135
194 Temperature_Celsius     0x0002   171   171   000    Old_age   Always       -       35 (Min/Max 15/38)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      2569         -
# 2  Short offline       Completed without error       00%      2545         -
# 3  Short offline       Completed without error       00%      2521         -
# 4  Short offline       Completed without error       00%      2497         -
# 5  Short offline       Completed without error       00%      2473         -
# 6  Short offline       Completed without error       00%      2449         -
# 7  Short offline       Completed without error       00%      2425         -
# 8  Short offline       Completed without error       00%      2401         -
# 9  Short offline       Completed without error       00%      2377         -
#10  Short offline       Completed without error       00%      2353         -
#11  Short offline       Completed without error       00%      2329         -
#12  Short offline       Completed without error       00%      2305         -
#13  Short offline       Completed without error       00%      2281         -
#14  Short offline       Completed without error       00%      2257         -
#15  Short offline       Completed without error       00%      2234         -
#16  Short offline       Completed without error       00%      2210         -
#17  Short offline       Completed without error       00%      2186         -
#18  Short offline       Completed without error       00%      2162         -
#19  Short offline       Completed without error       00%      2138         -
#20  Short offline       Completed without error       00%      2114         -
#21  Short offline       Completed without error       00%      2090         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


How could this happen to all my pools? Could it be some kind of system failure (hardware/motherboard)? Any suggestions?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The very first thing I will ask of you: did you do any stability testing?
1) MemTest86 or MemTest86+ for at least 3 full passes.
2) A CPU stress test such as Prime95 for at least 30 minutes; many people who are serious about a NAS will run it for days, even up to 30 days in a corporate environment.

Run them again, make sure your system is stable.

You say you are running VMs, and you only have 16GB RAM. Examine your swap space and ensure usage stays at zero ("0"), or no more than 1k. If you see swap being used, it means you ran out of RAM.
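
For example, something like this from the shell will show whether any swap is in use (the used figures should stay at or near zero):
Code:
free -h
swapon --show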

Your 12TB drive's statistics look good, except it does not appear you are running SMART long/extended tests, unless you are running them monthly. I recommend a long test once a week, and short tests daily.
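
If you want to verify a long test actually completes, you can kick one off manually from the shell, something like this (adjust the device name; per the polling time above, the extended test on that 12TB drive will take roughly 20 hours):
Code:
sudo smartctl -t long /dev/sda     # start an extended self-test in the background
sudo smartctl -l selftest /dev/sda # check the self-test log once it finishes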
Trying to import with -f:
Why? Your pools are online, they are imported. They have errors. Run a scrub on each pool, one at a time, AFTER the stability tests. If your system is not stable, you could be causing more harm.
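
For example, something along these lines for each pool, assuming it imports cleanly (zpool status shows progress and any errors found):
Code:
sudo zpool scrub MirrorPool
sudo zpool status -v MirrorPool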
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Why? Your pools are online, they are imported. They have errors. Run a scrub on each pool, one at a time, AFTER the stability tests. If your system is not stable, you could be causing more harm.
No, the pools were not imported. That output was from zpool import, not status.


Back to the OP / Original Poster:

ZFS is normally quite resilient, overcoming many problems. But power is one problem that ZFS can't overcome. A 550 watt power supply seems like enough, but perhaps the power supply itself failed on the disk-side power delivery.

Next, go into your BIOS and make sure all overclocking is off, including any memory overclocking. It may not restore your pools, but if you find that overclocking was enabled, it might explain the ZFS pool corruption.
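
As a quick sanity check from the running system, and assuming dmidecode is available (it normally is on SCALE), something like this will show whether the DIMMs are running above their rated JEDEC speed - an XMP profile shows up as a higher configured speed:
Code:
sudo dmidecode -t memory | grep -i speed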
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Seagate Barracuda - these are probably SMR drives.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Seagate Barracuda - these are probably SMR drives.
Yes, I think many of the Barracudas are now SMR. That would explain that pool's failure.

Though I don't understand why the other 2 pools failed.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I agree - there is something else here.
Are all the disks listed in Storage/Disks?
 

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
Seagate Barracuda - these are probably SMR drives.
Quite right, those are SMR drives:
Code:
admin@truenas[~]$ sudo smartctl -a /dev/sdb
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.63-production+truenas] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate BarraCuda 3.5 (SMR)
Device Model:     ST4000DM004-2U9104
Serial Number:    WW608B2L
<snip>


I agree - there is something else here.
Are all the disks listed in Storage/Disks?
Yep, see attached image/screenshot:

truenas-disks01.PNG
 

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
An update:
I've been running MemTest86 and actually found some issues! After testing the modules individually I found that the RAM basically just needed reseating, and it now passes multiple full MemTest86 runs. I spent several days testing different scenarios/setups, and all I needed was a reseat (something I should have done right after the first verified failed runs).

I've also been running long SMART tests, and none of the drives in "production" have reported anything. The drive initially used for booting has been removed from the system, and I've reinstalled TrueNAS SCALE on another SSD.

I'm now in a position where I MAY know why I got here, but I still cannot import the pool. I know the setup may not be the best, but I'm wondering if anyone has any suggestions going forward.
 

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
@DanteUseless combine both cases of F into a single command as in

zpool import -fF YourPoolName

There are more aggressive rewind options, but let's try this first.

Thank you for your quick reply! I've tried some variants of these earlier (see my first post).

But just ran it again:
Code:
root@truenas[~]# zpool import -f SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable
root@truenas[~]# zpool import -fF SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable
root@truenas[~]#
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thank you for your quick reply! I've tried some variants of these earlier (see my first post).

But just ran it again:
Code:
root@truenas[~]# zpool import -f SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable
root@truenas[~]# zpool import -fF SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable
root@truenas[~]#

Looks like we need to go deeper, but "FAULTED" on a metadata error doesn't inspire confidence.

Code:
echo 0 >> /sys/module/zfs/parameters/spa_load_verify_data
echo 0 >> /sys/module/zfs/parameters/spa_load_verify_metadata
zpool import -fX SingleDrive12TB


The first two lines disable verification of data and metadata. Yes, this is normally a really bad thing to do, but when rewinding pools to earlier transactions, the verification can take "hours to days" on large pools.
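
If you go this route, remember to switch the verification back on once the import attempt is done; the defaults should be 1 for both, so something like:
Code:
echo 1 > /sys/module/zfs/parameters/spa_load_verify_data
echo 1 > /sys/module/zfs/parameters/spa_load_verify_metadata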
 
Last edited:

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
Looks like we need to go deeper, but "FAULTED" on a metadata error doesn't inspire confidence.

Code:
echo 0 >> /sys/module/zfs/parameters/spa_load_verify_data
echo 0 >> /sys/module/zfs/parameters/spa_load_verify_metadata
zpool import -fXT SingleDrive12TB


The first two lines disable verification of data and metadata. Yes, this is normally a really bad thing to do, but when rewinding pools to earlier transactions, the verification can take "hours to days" on large pools.
Hm..

Code:
root@truenas[~]# zpool import -fXT SingleDrive12TB
invalid txg value
usage:
        import [-d dir] [-D]
        import [-o mntopts] [-o property=value] ...
            [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]] -a
        import [-o mntopts] [-o property=value] ...
            [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]]
            [--rewind-to-checkpoint] <pool | id> [newpool]
root@truenas[~]# man zpool-import
root@truenas[~]#

Looks like -X needs -F, -T implies -FX, and -T needs a txg value.
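
Before messing with zdb below, I might also try a read-only import with the rewind flag, in case that behaves any differently (just an idea on my side, not something suggested above):
Code:
zpool import -o readonly=on -fF SingleDrive12TB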

Messing around with zdb:
Code:
root@truenas[~]# zdb -e SingleDrive12TB

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 7967727370140335695
        name: 'SingleDrive12TB'
        state: 0
        hostid: 959789157
        hostname: '<redacted>'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 7967727370140335695
            children[0]:
                type: 'disk'
                id: 0
                guid: 1899871018480349724
                whole_disk: 0
                metaslab_array: 256
                metaslab_shift: 34
                ashift: 12
                asize: 11997986160640
                is_log: 0
                DTL: 13820
                create_txg: 4
                path: '/dev/disk/by-partuuid/706a807e-1296-410f-8907-8f800673ae8c'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2
zdb: can't open 'SingleDrive12TB': No such device or address

ZFS_DBGMSG(zdb) START:
spa.c:6252:spa_import(): spa_import: importing SingleDrive12TB
spa_misc.c:418:spa_load_note(): spa_load(SingleDrive12TB, config trusted): LOADING
spa_misc.c:404:spa_load_failed(): spa_load(SingleDrive12TB, config untrusted): FAILED: no valid uberblock found
spa_misc.c:418:spa_load_note(): spa_load(SingleDrive12TB, config untrusted): UNLOADING
ZFS_DBGMSG(zdb) END
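
Given the "no valid uberblock found" message, I guess the next step is to see what uberblocks the labels actually contain, maybe something like this (using the partuuid from the config above):
Code:
zdb -lu /dev/disk/by-partuuid/706a807e-1296-410f-8907-8f800673ae8c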
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Apologies - zpool import -fFX SingleDrive12TB will attempt the more aggressive rollback without needing to target a specific txg with the -T.

If you're going to go for specific TXGs, try also

Code:
for n in {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac,ad,ae,af,ag}; do
zdb -l "/dev/sd"$n"2" | grep 'name\|txg'; done


This will pull the ZFS label information from each drive - see which txg all members of a pool agree on, and roll back one txg at a time from there.
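
If the labels do agree on a txg, a rewind import targeting a specific transaction (read-only, to be safe) would look roughly like this - the txg value is just a placeholder, substitute the one reported by zdb:
Code:
zpool import -o readonly=on -f -T <txg> SingleDrive12TB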
 
Last edited:

DanteUseless

Cadet
Joined
Dec 27, 2023
Messages
8
Apologies - zpool import -fX SingleDrive12TB will attempt the more aggressive rollback without needing to target a specific txg with the -T.

If you're going to go for specific TXGs, try also

Code:
for n in {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac,ad,ae,af,ag}; do
zdb -l "/dev/sd"$n"2" | grep 'name\|txg'


This will pull the ZFS label information from each drive - see which txg all members of a pool agree on, and roll back one txg at a time from there.
Seems like some small typos again ;)
Code:
root@truenas[~]# zpool import -fX SingleDrive12TB
-n or -X only meaningful with -F
usage:
<snip>
root@truenas[~]# zpool import -FX SingleDrive12TB
cannot import 'SingleDrive12TB': pool was previously in use from another system.
Last accessed by lagring (hostid=39353865) at Wed Dec 31 16:00:00 1969
The pool can be imported, use 'zpool import -f' to import the pool.
root@truenas[~]# zpool import -f SingleDrive12TB
cannot import 'SingleDrive12TB': one or more devices is currently unavailable

Your one-liner also seems to be missing ";done":
Code:
root@truenas[~]# for n in {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac,ad,ae,af,ag}; do zdb -l "/dev/sd"$n"2" | grep 'name\|txg'; done
    name: 'SingleDrive12TB'
    txg: 1372491
    hostname: 'lagring'
        create_txg: 4
    name: 'MirrorPool'
    txg: 5940069
    hostname: 'lagring'
        create_txg: 4
            create_txg: 4
            create_txg: 4
    name: 'MirrorPool'
    txg: 5940069
    hostname: 'lagring'
        create_txg: 4
            create_txg: 4
            create_txg: 4
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
This will pull the ZFS label information from each drive - see which txg all members of a pool agree on, and roll back one txg at a time from there.
From each drive? SingleDrive12TB is a single-drive pool, so would that even work?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@DanteUseless Hopefully you have learned a few things: 1) SMR drives are not good for ZFS, and 2) a single-drive pool has no real protection.

I hope you are able to recover your data but I'm skeptical at this point.
 