Importing corrupted pool

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
Hi,

For unknown reasons, my TrueNAS SCALE pool lost a drive. I've replaced it, but the system can't complete the replacement. I'm left with the following status:

Code:
admin@truenas:~$ sudo zpool import
   pool: storage
     id: 3743818536692763833
  state: DEGRADED
status: One or more devices were being resilvered.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
 config:

        storage                                   DEGRADED
          raidz1-0                                DEGRADED
            1dfff5e5-c3d7-4a66-a880-16a844bbd7a2  ONLINE
            21141c85-a074-4a23-98ce-b0d9a3336ceb  ONLINE
            ab95ebc6-2ed8-11ea-a963-fcaa142a1593  UNAVAIL
            923db6a0-3207-11ea-b9f1-fcaa142a1593  ONLINE
admin@truenas:~$
admin@truenas:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool    14G  5.23G  8.77G        -         -     3%    37%  1.00x    ONLINE  -
admin@truenas:~$


So as you can see, the pool is not visible when listing pools; however, when I issue the import command, I can see it.

I was trying to import it using

Code:
admin@truenas:~$ sudo zpool import -F -a -n


However, it shows the following error:

Code:
admin@truenas:~$ sudo zpool import -F -a -n
cannot import 'storage': pool was previously in use from another system.
Last accessed by freenas (hostid=32333165) at Wed Sep  6 16:59:54 2023
The pool can be imported, use 'zpool import -f' to import the pool.
admin@truenas:~$


However, after issuing the import with the -f option, the command freezes and the following errors are visible in the logs:

Code:
Sep 21 10:31:20 truenas zed[74925]: eid=15 class=data pool='storage' priority=0 err=52 flags=0x808081 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74930]: eid=16 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=850561347584 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74937]: eid=18 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=1805837819904 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74934]: eid=17 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=850561347584 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74940]: eid=21 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=973714276352 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74941]: eid=20 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=973714276352 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:20 truenas zed[74942]: eid=19 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=1805837815808 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 10:31:21 truenas zed[74950]: eid=22 class=data pool='storage' priority=0 err=52 flags=0x808081 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74955]: eid=27 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=8192 offset=973946433536 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74959]: eid=26 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=8192 offset=1451735969792 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74961]: eid=23 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=8192 offset=1456541413376 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74963]: eid=25 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=8192 offset=1451735973888 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74962]: eid=24 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=8192 offset=1456541409280 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:21 truenas zed[74964]: eid=28 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=8192 offset=973946433536 priority=0 err=52 flags=0x100080 bookmark=0:116307:0:2
Sep 21 10:31:23 truenas kernel: WARNING: Pool 'storage' has encountered an uncorrectable I/O failure and has been suspended.

Sep 21 10:31:23 truenas zed[74976]: eid=29 class=data pool='storage' priority=0 err=52 flags=0x808001 bookmark=0:119186:0:5
Sep 21 10:31:23 truenas zed[74977]: eid=30 class=io_failure pool='storage'

Sep 21 10:35:19 truenas kernel: INFO: task middlewared (wo:53916 blocked for more than 120 seconds.
Sep 21 10:35:19 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:35:19 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:35:19 truenas kernel: task:middlewared (wo state:D stack:0     pid:53916 ppid:1432   flags:0x00000002
Sep 21 10:35:19 truenas kernel: Call Trace:
Sep 21 10:35:19 truenas kernel:  <TASK>
Sep 21 10:35:19 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:35:19 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:35:19 truenas kernel:  schedule_preempt_disabled+0x14/0x30
Sep 21 10:35:19 truenas kernel:  __mutex_lock.constprop.0+0x3b4/0x700
Sep 21 10:35:19 truenas kernel:  spa_open_common+0x65/0x440 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_get_stats+0x4a/0x210 [zfs]
Sep 21 10:35:19 truenas kernel:  ? spl_kmem_alloc_impl+0x87/0xd0 [spl]
Sep 21 10:35:19 truenas kernel:  zfs_ioc_pool_stats+0x3c/0x90 [zfs]
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:35:19 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:35:19 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:35:19 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:35:19 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:35:19 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:35:19 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:35:19 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:35:19 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:35:19 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:35:19 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:35:19 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:35:19 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:35:19 truenas kernel: RIP: 0033:0x7ffa5a1d2afb
Sep 21 10:35:19 truenas kernel: RSP: 002b:00007ffe0d773c50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:35:19 truenas kernel: RAX: ffffffffffffffda RBX: 0000000003c42290 RCX: 00007ffa5a1d2afb
Sep 21 10:35:19 truenas kernel: RDX: 00007ffe0d773cd0 RSI: 0000000000005a05 RDI: 0000000000000019
Sep 21 10:35:19 truenas kernel: RBP: 00007ffe0d7772c0 R08: 00007ffa5a2a8430 R09: 00007ffa5a2a8430
Sep 21 10:35:19 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe0d773cd0
Sep 21 10:35:19 truenas kernel: R13: 0000000003c42290 R14: 0000000004c83fc0 R15: 00007ffe0d7772d4
Sep 21 10:35:19 truenas kernel:  </TASK>
Sep 21 10:35:19 truenas kernel: INFO: task zpool:74731 blocked for more than 120 seconds.
Sep 21 10:35:19 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:35:19 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:35:19 truenas kernel: task:zpool           state:D stack:0     pid:74731 ppid:74730  flags:0x00004002
Sep 21 10:35:19 truenas kernel: Call Trace:
Sep 21 10:35:19 truenas kernel:  <TASK>
Sep 21 10:35:19 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:35:19 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:35:19 truenas kernel:  io_schedule+0x42/0x70
Sep 21 10:35:19 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Sep 21 10:35:19 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 21 10:35:19 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Sep 21 10:35:19 truenas kernel:  txg_wait_synced+0xc/0x40 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_load_impl.constprop.0+0x281/0x3c0 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_load+0x64/0x120 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_load_best+0x54/0x250 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_import+0x22d/0x680 [zfs]
Sep 21 10:35:19 truenas kernel:  zfs_ioc_pool_import+0x157/0x180 [zfs]
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:35:19 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:35:19 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:35:19 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:35:19 truenas kernel:  ? __irq_exit_rcu+0x2d/0x130
Sep 21 10:35:19 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:35:19 truenas kernel: RIP: 0033:0x7f0674abeafb
Sep 21 10:35:19 truenas kernel: RSP: 002b:00007ffeefec83f0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:35:19 truenas kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0674abeafb
Sep 21 10:35:19 truenas kernel: RDX: 00007ffeefec84b0 RSI: 0000000000005a02 RDI: 0000000000000003
Sep 21 10:35:19 truenas kernel: RBP: 00007ffeefecc3a0 R08: 00007f0674b94430 R09: 00007f0674b94430
Sep 21 10:35:19 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005578fa674dd0
Sep 21 10:35:19 truenas kernel: R13: 00007ffeefec84b0 R14: 00007f065c00a270 R15: 00005578fa7370a0
Sep 21 10:35:19 truenas kernel:  </TASK>
Sep 21 10:35:19 truenas kernel: INFO: task txg_sync:74913 blocked for more than 120 seconds.
Sep 21 10:35:19 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:35:19 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:35:19 truenas kernel: task:txg_sync        state:D stack:0     pid:74913 ppid:2      flags:0x00004000
Sep 21 10:35:19 truenas kernel: Call Trace:
Sep 21 10:35:19 truenas kernel:  <TASK>
Sep 21 10:35:19 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:35:19 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:35:19 truenas kernel:  schedule_timeout+0x94/0x150
Sep 21 10:35:19 truenas kernel:  ? __bpf_trace_tick_stop+0x10/0x10
Sep 21 10:35:19 truenas kernel:  io_schedule_timeout+0x4c/0x80
Sep 21 10:35:19 truenas kernel:  __cv_timedwait_common+0x12a/0x160 [spl]
Sep 21 10:35:19 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 21 10:35:19 truenas kernel:  __cv_timedwait_io+0x15/0x20 [spl]
Sep 21 10:35:19 truenas kernel:  zio_wait+0x10b/0x220 [zfs]
Sep 21 10:35:19 truenas kernel:  dmu_buf_will_dirty_impl+0xb1/0x190 [zfs]
Sep 21 10:35:19 truenas kernel:  dmu_write_impl+0x3f/0xd0 [zfs]
Sep 21 10:35:19 truenas kernel:  dmu_write+0xb2/0x110 [zfs]
Sep 21 10:35:19 truenas kernel:  space_map_write_intro_debug+0xaf/0xe0 [zfs]
Sep 21 10:35:19 truenas kernel:  space_map_write_impl+0x54/0x250 [zfs]
Sep 21 10:35:19 truenas kernel:  ? list_head+0x9/0x30 [zfs]
Sep 21 10:35:19 truenas kernel:  ? dmu_buf_will_dirty_impl+0x11a/0x190 [zfs]
Sep 21 10:35:19 truenas kernel:  space_map_write+0x9a/0x190 [zfs]
Sep 21 10:35:19 truenas kernel:  metaslab_flush+0xed/0x320 [zfs]
Sep 21 10:35:19 truenas kernel:  ? spa_estimate_metaslabs_to_flush+0x108/0x130 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_flush_metaslabs+0x14e/0x200 [zfs]
Sep 21 10:35:19 truenas kernel:  ? preempt_count_add+0x6a/0xa0
Sep 21 10:35:19 truenas kernel:  spa_sync_iterate_to_convergence+0x157/0x200 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_sync+0x306/0x5d0 [zfs]
Sep 21 10:35:19 truenas kernel:  txg_sync_thread+0x1e4/0x250 [zfs]
Sep 21 10:35:19 truenas kernel:  ? txg_dispatch_callbacks+0xf0/0xf0 [zfs]
Sep 21 10:35:19 truenas kernel:  ? sigorsets+0x10/0x10 [spl]
Sep 21 10:35:19 truenas kernel:  thread_generic_wrapper+0x57/0x70 [spl]
Sep 21 10:35:19 truenas kernel:  kthread+0xe6/0x110
Sep 21 10:35:19 truenas kernel:  ? kthread_complete_and_exit+0x20/0x20
Sep 21 10:35:19 truenas kernel:  ret_from_fork+0x1f/0x30
Sep 21 10:35:19 truenas kernel:  </TASK>
Sep 21 10:35:19 truenas kernel: INFO: task zpool:74987 blocked for more than 120 seconds.
Sep 21 10:35:19 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:35:19 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:35:19 truenas kernel: task:zpool           state:D stack:0     pid:74987 ppid:74986  flags:0x00000002
Sep 21 10:35:19 truenas kernel: Call Trace:
Sep 21 10:35:19 truenas kernel:  <TASK>
Sep 21 10:35:19 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:35:19 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:35:19 truenas kernel:  schedule_preempt_disabled+0x14/0x30
Sep 21 10:35:19 truenas kernel:  __mutex_lock.constprop.0+0x3b4/0x700
Sep 21 10:35:19 truenas kernel:  ? release_pages+0x168/0x4e0
Sep 21 10:35:19 truenas kernel:  spa_open_common+0x65/0x440 [zfs]
Sep 21 10:35:19 truenas kernel:  spa_get_stats+0x4a/0x210 [zfs]
Sep 21 10:35:19 truenas kernel:  ? spl_kmem_alloc_impl+0x87/0xd0 [spl]
Sep 21 10:35:19 truenas kernel:  zfs_ioc_pool_stats+0x3c/0x90 [zfs]
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:35:19 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:35:19 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:35:19 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:35:19 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:35:19 truenas kernel:  ? exc_page_fault+0x70/0x170
Sep 21 10:35:19 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:35:19 truenas kernel: RIP: 0033:0x7f15e739bafb
Sep 21 10:35:19 truenas kernel: RSP: 002b:00007ffeba2a6b40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:35:19 truenas kernel: RAX: ffffffffffffffda RBX: 00005556a3485e50 RCX: 00007f15e739bafb
Sep 21 10:35:19 truenas kernel: RDX: 00007ffeba2a6bc0 RSI: 0000000000005a05 RDI: 0000000000000004
Sep 21 10:35:19 truenas kernel: RBP: 00007ffeba2aa1b0 R08: 00005556a3485690 R09: 00007f15e7470d10
Sep 21 10:35:19 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffeba2a6bc0
Sep 21 10:35:19 truenas kernel: R13: 00005556a3485e50 R14: 00005556a347b2c0 R15: 00007ffeba2aa1c4
Sep 21 10:35:19 truenas kernel:  </TASK>
Sep 21 10:37:20 truenas kernel: INFO: task middlewared (wo:53916 blocked for more than 241 seconds.
Sep 21 10:37:20 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:37:20 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:37:20 truenas kernel: task:middlewared (wo state:D stack:0     pid:53916 ppid:1432   flags:0x00000002
Sep 21 10:37:20 truenas kernel: Call Trace:
Sep 21 10:37:20 truenas kernel:  <TASK>
Sep 21 10:37:20 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:37:20 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:37:20 truenas kernel:  schedule_preempt_disabled+0x14/0x30
Sep 21 10:37:20 truenas kernel:  __mutex_lock.constprop.0+0x3b4/0x700
Sep 21 10:37:20 truenas kernel:  spa_open_common+0x65/0x440 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_get_stats+0x4a/0x210 [zfs]
Sep 21 10:37:20 truenas kernel:  ? spl_kmem_alloc_impl+0x87/0xd0 [spl]
Sep 21 10:37:20 truenas kernel:  zfs_ioc_pool_stats+0x3c/0x90 [zfs]
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:37:20 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:37:20 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:37:20 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:37:20 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:37:20 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:37:20 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:37:20 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:37:20 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:37:20 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 10:37:20 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:37:20 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 10:37:20 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:37:20 truenas kernel: RIP: 0033:0x7ffa5a1d2afb
Sep 21 10:37:20 truenas kernel: RSP: 002b:00007ffe0d773c50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:37:20 truenas kernel: RAX: ffffffffffffffda RBX: 0000000003c42290 RCX: 00007ffa5a1d2afb
Sep 21 10:37:20 truenas kernel: RDX: 00007ffe0d773cd0 RSI: 0000000000005a05 RDI: 0000000000000019
Sep 21 10:37:20 truenas kernel: RBP: 00007ffe0d7772c0 R08: 00007ffa5a2a8430 R09: 00007ffa5a2a8430
Sep 21 10:37:20 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe0d773cd0
Sep 21 10:37:20 truenas kernel: R13: 0000000003c42290 R14: 0000000004c83fc0 R15: 00007ffe0d7772d4
Sep 21 10:37:20 truenas kernel:  </TASK>
Sep 21 10:37:20 truenas kernel: INFO: task zpool:74731 blocked for more than 241 seconds.
Sep 21 10:37:20 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:37:20 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:37:20 truenas kernel: task:zpool           state:D stack:0     pid:74731 ppid:74730  flags:0x00004002
Sep 21 10:37:20 truenas kernel: Call Trace:
Sep 21 10:37:20 truenas kernel:  <TASK>
Sep 21 10:37:20 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:37:20 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:37:20 truenas kernel:  io_schedule+0x42/0x70
Sep 21 10:37:20 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Sep 21 10:37:20 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 21 10:37:20 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Sep 21 10:37:20 truenas kernel:  txg_wait_synced+0xc/0x40 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_load_impl.constprop.0+0x281/0x3c0 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_load+0x64/0x120 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_load_best+0x54/0x250 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_import+0x22d/0x680 [zfs]
Sep 21 10:37:20 truenas kernel:  zfs_ioc_pool_import+0x157/0x180 [zfs]
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:37:20 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:37:20 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:37:20 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:37:20 truenas kernel:  ? __irq_exit_rcu+0x2d/0x130
Sep 21 10:37:20 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:37:20 truenas kernel: RIP: 0033:0x7f0674abeafb
Sep 21 10:37:20 truenas kernel: RSP: 002b:00007ffeefec83f0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:37:20 truenas kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0674abeafb
Sep 21 10:37:20 truenas kernel: RDX: 00007ffeefec84b0 RSI: 0000000000005a02 RDI: 0000000000000003
Sep 21 10:37:20 truenas kernel: RBP: 00007ffeefecc3a0 R08: 00007f0674b94430 R09: 00007f0674b94430
Sep 21 10:37:20 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005578fa674dd0
Sep 21 10:37:20 truenas kernel: R13: 00007ffeefec84b0 R14: 00007f065c00a270 R15: 00005578fa7370a0
Sep 21 10:37:20 truenas kernel:  </TASK>
Sep 21 10:37:20 truenas kernel: INFO: task zpool:74987 blocked for more than 241 seconds.
Sep 21 10:37:20 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 10:37:20 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 10:37:20 truenas kernel: task:zpool           state:D stack:0     pid:74987 ppid:74986  flags:0x00000002
Sep 21 10:37:20 truenas kernel: Call Trace:
Sep 21 10:37:20 truenas kernel:  <TASK>
Sep 21 10:37:20 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 10:37:20 truenas kernel:  schedule+0x5a/0xb0
Sep 21 10:37:20 truenas kernel:  schedule_preempt_disabled+0x14/0x30
Sep 21 10:37:20 truenas kernel:  __mutex_lock.constprop.0+0x3b4/0x700
Sep 21 10:37:20 truenas kernel:  ? release_pages+0x168/0x4e0
Sep 21 10:37:20 truenas kernel:  spa_open_common+0x65/0x440 [zfs]
Sep 21 10:37:20 truenas kernel:  spa_get_stats+0x4a/0x210 [zfs]
Sep 21 10:37:20 truenas kernel:  ? spl_kmem_alloc_impl+0x87/0xd0 [spl]
Sep 21 10:37:20 truenas kernel:  zfs_ioc_pool_stats+0x3c/0x90 [zfs]
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 10:37:20 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 10:37:20 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 10:37:20 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 10:37:20 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 10:37:20 truenas kernel:  ? exc_page_fault+0x70/0x170
Sep 21 10:37:20 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 10:37:20 truenas kernel: RIP: 0033:0x7f15e739bafb
Sep 21 10:37:20 truenas kernel: RSP: 002b:00007ffeba2a6b40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 10:37:20 truenas kernel: RAX: ffffffffffffffda RBX: 00005556a3485e50 RCX: 00007f15e739bafb
Sep 21 10:37:20 truenas kernel: RDX: 00007ffeba2a6bc0 RSI: 0000000000005a05 RDI: 0000000000000004
Sep 21 10:37:20 truenas kernel: RBP: 00007ffeba2aa1b0 R08: 00005556a3485690 R09: 00007f15e7470d10
Sep 21 10:37:20 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffeba2a6bc0
Sep 21 10:37:20 truenas kernel: R13: 00005556a3485e50 R14: 00005556a347b2c0 R15: 00007ffeba2aa1c4
Sep 21 10:37:20 truenas kernel:  </TASK>


Can you please offer me some hints on how I could restore this pool?

Thanks in advance!

BR,
MCC
 

Davvo (MVP | Joined: Jul 12, 2022 | Messages: 3,222)
Running the last command without -a and -n should result in a successful pool import, so zpool import -f storage.
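
For example (a minimal sketch; "storage" is the pool name shown by your zpool import output):

Code:
sudo zpool import -f storage
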

I don't see why you should be in a situation where you need to import the pool since you were resilvering an already imported pool.
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
Unfortunately it's exactly the same:

Code:
Sep 21 12:59:57 truenas zed[6106]: eid=6 class=data pool='storage' priority=0 err=52 flags=0x808081 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6109]: eid=7 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=850561347584 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6114]: eid=10 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=1805837815808 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6117]: eid=8 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=850561347584 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6119]: eid=9 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=1805837819904 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6120]: eid=11 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=28672 offset=973714276352 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6121]: eid=12 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=28672 offset=973714276352 priority=0 err=52 flags=0x100080 bookmark=0:115249:0:3
Sep 21 12:59:57 truenas zed[6130]: eid=13 class=data pool='storage' priority=0 err=52 flags=0x808081 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6134]: eid=14 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=16384 offset=1022479888384 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6137]: eid=15 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=20480 offset=1022479888384 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6141]: eid=19 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=20480 offset=1009182138368 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6140]: eid=17 class=checksum pool='storage' vdev=923db6a0-3207-11ea-b9f1-fcaa142a1593 size=20480 offset=1452258902016 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6142]: eid=16 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=16384 offset=1452258906112 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:57 truenas zed[6144]: eid=18 class=checksum pool='storage' vdev=21141c85-a074-4a23-98ce-b0d9a3336ceb size=16384 offset=1009182142464 priority=0 err=52 flags=0x100080 bookmark=0:119186:0:5
Sep 21 12:59:59 truenas kernel: WARNING: Pool 'storage' has encountered an uncorrectable I/O failure and has been suspended.


Sep 21 13:02:47 truenas kernel: INFO: task middlewared (wo:5739 blocked for more than 120 seconds.
Sep 21 13:02:47 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 13:02:47 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 13:02:47 truenas kernel: task:middlewared (wo state:D stack:0     pid:5739  ppid:1432   flags:0x00000002
Sep 21 13:02:47 truenas kernel: Call Trace:
Sep 21 13:02:47 truenas kernel:  <TASK>
Sep 21 13:02:47 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 13:02:47 truenas kernel:  schedule+0x5a/0xb0
Sep 21 13:02:47 truenas kernel:  schedule_preempt_disabled+0x14/0x30
Sep 21 13:02:47 truenas kernel:  __mutex_lock.constprop.0+0x3b4/0x700
Sep 21 13:02:47 truenas kernel:  spa_open_common+0x65/0x440 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_get_stats+0x4a/0x210 [zfs]
Sep 21 13:02:47 truenas kernel:  ? spl_kmem_alloc_impl+0x87/0xd0 [spl]
Sep 21 13:02:47 truenas kernel:  zfs_ioc_pool_stats+0x3c/0x90 [zfs]
Sep 21 13:02:47 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 13:02:47 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 13:02:47 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 13:02:47 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 13:02:47 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 13:02:47 truenas kernel:  ? exit_to_user_mode_prepare+0x175/0x1c0
Sep 21 13:02:47 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 13:02:47 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 13:02:47 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 13:02:47 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 13:02:47 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 13:02:47 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 13:02:47 truenas kernel: RIP: 0033:0x7f06318f6afb
Sep 21 13:02:47 truenas kernel: RSP: 002b:00007ffe69d808c0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 13:02:47 truenas kernel: RAX: ffffffffffffffda RBX: 00000000037a9030 RCX: 00007f06318f6afb
Sep 21 13:02:47 truenas kernel: RDX: 00007ffe69d80940 RSI: 0000000000005a05 RDI: 0000000000000019
Sep 21 13:02:47 truenas kernel: RBP: 00007ffe69d83f30 R08: 00007f06319cc430 R09: 00007f06319cc430
Sep 21 13:02:47 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe69d80940
Sep 21 13:02:47 truenas kernel: R13: 00000000037a9030 R14: 00000000047ca980 R15: 00007ffe69d83f44
Sep 21 13:02:47 truenas kernel:  </TASK>
Sep 21 13:02:47 truenas kernel: INFO: task zpool:5901 blocked for more than 120 seconds.
Sep 21 13:02:47 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 13:02:47 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 13:02:47 truenas kernel: task:zpool           state:D stack:0     pid:5901  ppid:5899   flags:0x00004002
Sep 21 13:02:47 truenas kernel: Call Trace:
Sep 21 13:02:47 truenas kernel:  <TASK>
Sep 21 13:02:47 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 13:02:47 truenas kernel:  schedule+0x5a/0xb0
Sep 21 13:02:47 truenas kernel:  io_schedule+0x42/0x70
Sep 21 13:02:47 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Sep 21 13:02:47 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 21 13:02:47 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Sep 21 13:02:47 truenas kernel:  txg_wait_synced+0xc/0x40 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_load_impl.constprop.0+0x281/0x3c0 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_load+0x64/0x120 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_load_best+0x54/0x250 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_import+0x22d/0x680 [zfs]
Sep 21 13:02:47 truenas kernel:  zfs_ioc_pool_import+0x157/0x180 [zfs]
Sep 21 13:02:47 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 21 13:02:47 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 21 13:02:47 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 21 13:02:47 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 21 13:02:47 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 21 13:02:47 truenas kernel:  ? exit_to_user_mode_prepare+0x175/0x1c0
Sep 21 13:02:47 truenas kernel:  ? syscall_exit_to_user_mode+0x17/0x40
Sep 21 13:02:47 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 21 13:02:47 truenas kernel:  ? exc_page_fault+0x70/0x170
Sep 21 13:02:47 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 21 13:02:47 truenas kernel: RIP: 0033:0x7f0fef7bdafb
Sep 21 13:02:47 truenas kernel: RSP: 002b:00007ffdf4dfa990 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 21 13:02:47 truenas kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0fef7bdafb
Sep 21 13:02:47 truenas kernel: RDX: 00007ffdf4dfaa50 RSI: 0000000000005a02 RDI: 0000000000000003
Sep 21 13:02:47 truenas kernel: RBP: 00007ffdf4dfe940 R08: 00007f0fef893460 R09: 00007f0fef893460
Sep 21 13:02:47 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000055d37db5fdd0
Sep 21 13:02:47 truenas kernel: R13: 00007ffdf4dfaa50 R14: 00007f0fd80097c0 R15: 000055d37db7f780
Sep 21 13:02:47 truenas kernel:  </TASK>
Sep 21 13:02:47 truenas kernel: INFO: task txg_sync:6092 blocked for more than 120 seconds.
Sep 21 13:02:47 truenas kernel:       Tainted: P           OE      6.1.42-production+truenas #2
Sep 21 13:02:47 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 21 13:02:47 truenas kernel: task:txg_sync        state:D stack:0     pid:6092  ppid:2      flags:0x00004000
Sep 21 13:02:47 truenas kernel: Call Trace:
Sep 21 13:02:47 truenas kernel:  <TASK>
Sep 21 13:02:47 truenas kernel:  __schedule+0x2ed/0x860
Sep 21 13:02:47 truenas kernel:  schedule+0x5a/0xb0
Sep 21 13:02:47 truenas kernel:  schedule_timeout+0x94/0x150
Sep 21 13:02:47 truenas kernel:  ? __bpf_trace_tick_stop+0x10/0x10
Sep 21 13:02:47 truenas kernel:  io_schedule_timeout+0x4c/0x80
Sep 21 13:02:47 truenas kernel:  __cv_timedwait_common+0x12a/0x160 [spl]
Sep 21 13:02:47 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 21 13:02:47 truenas kernel:  __cv_timedwait_io+0x15/0x20 [spl]
Sep 21 13:02:47 truenas kernel:  zio_wait+0x10b/0x220 [zfs]
Sep 21 13:02:47 truenas kernel:  dmu_buf_will_dirty_impl+0xb1/0x190 [zfs]
Sep 21 13:02:47 truenas kernel:  dmu_write_impl+0x3f/0xd0 [zfs]
Sep 21 13:02:47 truenas kernel:  dmu_write+0xb2/0x110 [zfs]
Sep 21 13:02:47 truenas kernel:  space_map_write_intro_debug+0xaf/0xe0 [zfs]
Sep 21 13:02:47 truenas kernel:  space_map_write_impl+0x54/0x250 [zfs]
Sep 21 13:02:47 truenas kernel:  ? list_head+0x9/0x30 [zfs]
Sep 21 13:02:47 truenas kernel:  ? dmu_buf_will_dirty_impl+0x11a/0x190 [zfs]
Sep 21 13:02:47 truenas kernel:  space_map_write+0x9a/0x190 [zfs]
Sep 21 13:02:47 truenas kernel:  metaslab_flush+0xed/0x320 [zfs]
Sep 21 13:02:47 truenas kernel:  ? spa_estimate_metaslabs_to_flush+0x108/0x130 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_flush_metaslabs+0x14e/0x200 [zfs]
Sep 21 13:02:47 truenas kernel:  ? preempt_count_add+0x6a/0xa0
Sep 21 13:02:47 truenas kernel:  spa_sync_iterate_to_convergence+0x157/0x200 [zfs]
Sep 21 13:02:47 truenas kernel:  spa_sync+0x306/0x5d0 [zfs]
Sep 21 13:02:47 truenas kernel:  txg_sync_thread+0x1e4/0x250 [zfs]
Sep 21 13:02:47 truenas kernel:  ? txg_dispatch_callbacks+0xf0/0xf0 [zfs]
Sep 21 13:02:47 truenas kernel:  ? sigorsets+0x10/0x10 [spl]
Sep 21 13:02:47 truenas kernel:  thread_generic_wrapper+0x57/0x70 [spl]
Sep 21 13:02:47 truenas kernel:  kthread+0xe6/0x110
Sep 21 13:02:47 truenas kernel:  ? kthread_complete_and_exit+0x20/0x20
Sep 21 13:02:47 truenas kernel:  ret_from_fork+0x1f/0x30
Sep 21 13:02:47 truenas kernel:  </TASK>
 

Davvo (MVP | Joined: Jul 12, 2022 | Messages: 3,222)
WARNING: Pool 'storage' has encountered an uncorrectable I/O failure and has been suspended.
This is the issue.
Can you post the output of zpool status -vx?
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
I came to the same conclusion. The question is what may be causing it and/or how to resolve it.

Thanks!

MCC
 

Davvo (MVP | Joined: Jul 12, 2022 | Messages: 3,222)

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
I can only see it when I click Import Pool; I can't see it otherwise.

This is the status from the command line:

Code:
admin@truenas:~$ sudo zpool status -vx
all pools are healthy
admin@truenas:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool    14G  6.01G  7.99G        -         -     4%    42%  1.00x    ONLINE  -
admin@truenas:~$ sudo zpool import
   pool: storage
     id: 3743818536692763833
  state: DEGRADED
status: One or more devices were being resilvered.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
 config:

        storage                                   DEGRADED
          raidz1-0                                DEGRADED
            1dfff5e5-c3d7-4a66-a880-16a844bbd7a2  ONLINE
            21141c85-a074-4a23-98ce-b0d9a3336ceb  ONLINE
            ab95ebc6-2ed8-11ea-a963-fcaa142a1593  UNAVAIL
            923db6a0-3207-11ea-b9f1-fcaa142a1593  ONLINE
admin@truenas:~$


Thanks!
MCC
 

Davvo (MVP | Joined: Jul 12, 2022 | Messages: 3,222)
You cannot import it from the WebUI, and you can't see it in the WebUI, correct?
What's the output of zpool status storage?
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
You cannot import it from the WebUI, and you can't see it in the WebUI, correct?
What's the output of zpool status storage?

I can't import it in the WebUI; it hangs forever and I see the same set of errors in the logs.

Here is the output (nothing):

Code:
admin@truenas:~$ sudo zpool status storage
cannot open 'storage': no such pool
admin@truenas:~$



Best regards,
MCC
 

HoneyBadger (actually does care | Administrator, Moderator, iXsystems | Joined: Feb 6, 2014 | Messages: 5,112)
Hello @mcc666

Can you please describe your hardware setup, paying particular attention to the storage controller, method of attaching drives, and the drive models themselves? A RAIDZ1 with a single failed device is typically able to be imported, but your pool is throwing an I/O error which indicates that the underlying drives are in an unstable state.
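
One way to sanity-check the drives themselves is a SMART readout (a rough sketch; /dev/sdX is a placeholder for each pool member's device node):

Code:
# print SMART identity, overall health, attributes, and self-test log for one drive
sudo smartctl -a /dev/sdX
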

You may be able to import using the zpool import -nFX option, but note that the -X parameter implies "extreme rollback" which may result in missing or partially incomplete data on your pool.
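
As a dry-run sketch (assuming the pool name "storage" from your earlier output; with -n, nothing is committed):

Code:
sudo zpool import -nFX storage
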
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
Here is the setup:

MB: MSI B250M BAZOOKA (MS-7A70, Intel® B250 chipset) + i5-6500T + 16 GB RAM

Drives: Hitachi_HUA723020ALA641

BTW, I am a bit hesitant to run the zpool import -nFX option until I have explored other possibilities. I have a feeling there may be something wrong with the on-board SATA controller. I don't see any issues with the boot drive, though.

Best regards,
MCC
 

HoneyBadger (actually does care | Administrator, Moderator, iXsystems | Joined: Feb 6, 2014 | Messages: 5,112)
Your disks aren't SMR, and while your motherboard uses a Realtek chipset for networking, that shouldn't impact pool import. It's possible that there are other hardware issues (SATA controller, memory) at play, though.

The -n option on the import tells ZFS to "no-op" or simulate the recovery; however, it's not clear whether your previous attempts used the lowercase -f (force import, even if potentially active) or the uppercase -F (first-level recovery mode). If the pool is damaged, -f alone will still result in the I/O error, and you would need -F to attempt recovery. But because we want to check it out first, use the -n option as well.

Including the -n in the command won't commit any changes to the pool - it will simply tell you if it should be importable.
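
Putting those flags together, a sketch assuming the pool name "storage":

Code:
# lowercase -f: force the import past the "last accessed by another system" check
# uppercase -F: recovery mode, rolls back the last few transactions if needed
# -n: dry run, reports whether recovery would succeed without committing anything
sudo zpool import -fFn storage
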
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
This is the outcome:

Code:
admin@truenas:~$ sudo zpool import -nFX
   pool: storage
     id: 3743818536692763833
  state: DEGRADED
status: One or more devices were being resilvered.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
 config:

        storage                                   DEGRADED
          raidz1-0                                DEGRADED
            1dfff5e5-c3d7-4a66-a880-16a844bbd7a2  ONLINE
            21141c85-a074-4a23-98ce-b0d9a3336ceb  ONLINE
            ab95ebc6-2ed8-11ea-a963-fcaa142a1593  UNAVAIL
            923db6a0-3207-11ea-b9f1-fcaa142a1593  ONLINE
admin@truenas:~$


BR,
MCC
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
Thanks a lot. This is the result (or rather the lack of one):

Code:
admin@truenas:~$ sudo zpool import -fF
   pool: storage
     id: 3743818536692763833
  state: DEGRADED
status: One or more devices were being resilvered.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
 config:

        storage                                   DEGRADED
          raidz1-0                                DEGRADED
            1dfff5e5-c3d7-4a66-a880-16a844bbd7a2  ONLINE
            21141c85-a074-4a23-98ce-b0d9a3336ceb  ONLINE
            ab95ebc6-2ed8-11ea-a963-fcaa142a1593  UNAVAIL
            923db6a0-3207-11ea-b9f1-fcaa142a1593  ONLINE
admin@truenas:~$ sudo zpool status
  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:18 with 0 errors on Wed Sep 20 03:45:20 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors
admin@truenas:~$


Best regards,
MCC
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
You'll have to actually specify the pool to import, unless you use the -a flag for "all available pools", so the command will be:

zpool import -fF storage
Thanks. The -fF option resulted in a bunch of errors:

Code:
Sep 25 20:47:14 truenas kernel: WARNING: Pool 'storage' has encountered an uncorrectable I/O failure and has been suspended.

Sep 25 20:49:58 truenas kernel: INFO: task zpool:8701 blocked for more than 120 seconds.
Sep 25 20:49:58 truenas kernel:       Tainted: P           OE      6.1.50-production+truenas #2
Sep 25 20:49:58 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 25 20:49:58 truenas kernel: task:zpool           state:D stack:0     pid:8701  ppid:8700   flags:0x00004002
Sep 25 20:49:58 truenas kernel: Call Trace:
Sep 25 20:49:58 truenas kernel:  <TASK>
Sep 25 20:49:58 truenas kernel:  __schedule+0x2ed/0x860
Sep 25 20:49:58 truenas kernel:  schedule+0x5a/0xb0
Sep 25 20:49:58 truenas kernel:  io_schedule+0x42/0x70
Sep 25 20:49:58 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Sep 25 20:49:58 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 25 20:49:58 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Sep 25 20:49:58 truenas kernel:  txg_wait_synced+0xc/0x40 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_load_impl.constprop.0+0x281/0x3c0 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_load+0x64/0x120 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_load_best+0x54/0x250 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_import+0x22d/0x680 [zfs]
Sep 25 20:49:58 truenas kernel:  zfs_ioc_pool_import+0x157/0x180 [zfs]
Sep 25 20:49:58 truenas kernel:  zfsdev_ioctl_common+0x67c/0x770 [zfs]
Sep 25 20:49:58 truenas kernel:  ? __kmalloc_node+0xbf/0x150
Sep 25 20:49:58 truenas kernel:  zfsdev_ioctl+0x4f/0xd0 [zfs]
Sep 25 20:49:58 truenas kernel:  __x64_sys_ioctl+0x8d/0xd0
Sep 25 20:49:58 truenas kernel:  do_syscall_64+0x58/0xc0
Sep 25 20:49:58 truenas kernel:  ? exit_to_user_mode_prepare+0x175/0x1c0
Sep 25 20:49:58 truenas kernel:  ? syscall_exit_to_user_mode+0x27/0x40
Sep 25 20:49:58 truenas kernel:  ? do_syscall_64+0x67/0xc0
Sep 25 20:49:58 truenas kernel:  ? exc_page_fault+0x70/0x170
Sep 25 20:49:58 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x64/0xce
Sep 25 20:49:58 truenas kernel: RIP: 0033:0x7f184a097afb
Sep 25 20:49:58 truenas kernel: RSP: 002b:00007ffd145a7d50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Sep 25 20:49:58 truenas kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f184a097afb
Sep 25 20:49:58 truenas kernel: RDX: 00007ffd145a7e10 RSI: 0000000000005a02 RDI: 0000000000000003
Sep 25 20:49:58 truenas kernel: RBP: 00007ffd145abd00 R08: 00007f184a16d460 R09: 00007f184a16d460
Sep 25 20:49:58 truenas kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005633bf4f3dd0
Sep 25 20:49:58 truenas kernel: R13: 00007ffd145a7e10 R14: 00007f1830006230 R15: 00005633bf5147b0
Sep 25 20:49:58 truenas kernel:  </TASK>
Sep 25 20:49:58 truenas kernel: INFO: task txg_sync:8866 blocked for more than 120 seconds.
Sep 25 20:49:58 truenas kernel:       Tainted: P           OE      6.1.50-production+truenas #2
Sep 25 20:49:58 truenas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 25 20:49:58 truenas kernel: task:txg_sync        state:D stack:0     pid:8866  ppid:2      flags:0x00004000
Sep 25 20:49:58 truenas kernel: Call Trace:
Sep 25 20:49:58 truenas kernel:  <TASK>
Sep 25 20:49:58 truenas kernel:  __schedule+0x2ed/0x860
Sep 25 20:49:58 truenas kernel:  schedule+0x5a/0xb0
Sep 25 20:49:58 truenas kernel:  schedule_timeout+0x94/0x150
Sep 25 20:49:58 truenas kernel:  ? __bpf_trace_tick_stop+0x10/0x10
Sep 25 20:49:58 truenas kernel:  io_schedule_timeout+0x4c/0x80
Sep 25 20:49:58 truenas kernel:  __cv_timedwait_common+0x12a/0x160 [spl]
Sep 25 20:49:58 truenas kernel:  ? cpuusage_read+0x10/0x10
Sep 25 20:49:58 truenas kernel:  __cv_timedwait_io+0x15/0x20 [spl]
Sep 25 20:49:58 truenas kernel:  zio_wait+0x10b/0x220 [zfs]
Sep 25 20:49:58 truenas kernel:  dmu_buf_will_dirty_impl+0xb1/0x190 [zfs]
Sep 25 20:49:58 truenas kernel:  dmu_write_impl+0x3f/0xd0 [zfs]
Sep 25 20:49:58 truenas kernel:  dmu_write+0xb2/0x110 [zfs]
Sep 25 20:49:58 truenas kernel:  space_map_write_intro_debug+0xaf/0xe0 [zfs]
Sep 25 20:49:58 truenas kernel:  space_map_write_impl+0x54/0x250 [zfs]
Sep 25 20:49:58 truenas kernel:  ? list_head+0x9/0x30 [zfs]
Sep 25 20:49:58 truenas kernel:  ? dmu_buf_will_dirty_impl+0x11a/0x190 [zfs]
Sep 25 20:49:58 truenas kernel:  space_map_write+0x9a/0x190 [zfs]
Sep 25 20:49:58 truenas kernel:  metaslab_flush+0xed/0x320 [zfs]
Sep 25 20:49:58 truenas kernel:  ? spa_estimate_metaslabs_to_flush+0x108/0x130 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_flush_metaslabs+0x14e/0x200 [zfs]
Sep 25 20:49:58 truenas kernel:  ? preempt_count_add+0x6a/0xa0
Sep 25 20:49:58 truenas kernel:  spa_sync_iterate_to_convergence+0x157/0x200 [zfs]
Sep 25 20:49:58 truenas kernel:  spa_sync+0x306/0x5d0 [zfs]
Sep 25 20:49:58 truenas kernel:  txg_sync_thread+0x1e4/0x250 [zfs]
Sep 25 20:49:58 truenas kernel:  ? txg_dispatch_callbacks+0xf0/0xf0 [zfs]
Sep 25 20:49:58 truenas kernel:  ? sigorsets+0x10/0x10 [spl]
Sep 25 20:49:58 truenas kernel:  thread_generic_wrapper+0x57/0x70 [spl]
Sep 25 20:49:58 truenas kernel:  kthread+0xe6/0x110
Sep 25 20:49:58 truenas kernel:  ? kthread_complete_and_exit+0x20/0x20
Sep 25 20:49:58 truenas kernel:  ret_from_fork+0x1f/0x30
Sep 25 20:49:58 truenas kernel:  </TASK>




Do you think I should try -fFX, or look into buying a new SATA controller?

BR,
MCC
 

HoneyBadger (actually does care | Administrator, Moderator, iXsystems | Joined: Feb 6, 2014 | Messages: 5,112)
You'll need to reboot after that. You may also want to check your system using memtest or a similar program to see if it identifies any faults with CPU/RAM - but the fact that it doesn't throw a fault until pool import means it could be damage to the pool itself. -fFX may work here.
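
If it comes to that, a sketch of the last-resort attempt (assuming the pool name "storage"; -X is the extreme rollback and may discard recently written data):

Code:
sudo zpool import -fFX storage
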
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
You'll need to reboot after that. You may also want to check your system using memtest or a similar program to see if it identifies any faults with CPU/RAM - but the fact that it doesn't throw a fault until pool import means it could be damage to the pool itself. -fFX may work here.
I see the same kernel errors, though there is no disk activity happening.

Will run some thorough tests to see what's going on.


BR,
 

mcc666 (Dabbler | Joined: Sep 22, 2021 | Messages: 20)
Memtest ran overnight: 0 errors... SMART long tests: 0 errors...

It's a bit odd.

MCC
 