Highly Unstable after 'upgrade' to TrueNAS 12 RC1

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Yes that's it. All of my Dell and IBM servers have that feature disabled by default, but my Supermicro, which I use for pfSense, had it enabled by default and it was causing kernel panics and reboots. Disable memory scrub, BIOS watchdog and hardware watchdog.

Interesting...

A) Regarding Patrol Scrub being enabled (the default): I have not had any panics/reboots due to this setting in the year-plus I have been running this box, with FreeNAS or any other OS.

B) The watchdog in my case comes disabled by default (via jumper). I'm not a fan of systems automatically rebooting because they think the box is locked up, so I've left it that way... I've had instances in the past where that happened while the system was otherwise running without issue.
 

Brezlord

Contributor
Joined
Jan 7, 2017
Messages
189
Have you run diagnostic tests on the system to see if there are any hardware issues? Run the server with no VMs/jails and see if the problem goes away; if it does, start one jail at a time and see which one causes a crash.
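For the hardware side, a couple of starting points might be worth running first (smartctl ships with TrueNAS; the device name is just an example, and RAM is best checked by booting a memtest86+ stick):

Code:
# long SMART self-test on each data disk, then check the results once it finishes
smartctl -t long /dev/ada0
smartctl -a /dev/ada0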
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
The VMs do seem to be the root of the issue, as they lock up before the whole system does. I'm not running anything unusual though, just Debian 10, all on the latest patch versions.

I'll give it a go... I'll try anything at this point!
 

Brezlord

Contributor
Joined
Jan 7, 2017
Messages
189
I don't run any VMs/jails on my TrueNAS so don't have any experience with them.
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
No luck with this. I moved all VMs to Ubuntu, as other users suggested it currently works better as a VM guest than other OSs.

Still getting panics approximately every 24 hours. Mixed causes now - sometimes "spin lock held too long", sometimes "page fault".

Code:
Dump header from device: /dev/da5p1
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1581568
  Blocksize: 512
  Compression: none
  Dumptime: Wed Oct 21 12:56:04 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: page fault
  Dump Parity: 2926693488
  Bounds: 0
  Dump Status: good

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address    = 0x18
fault code        = supervisor write data, page not present
instruction pointer    = 0x20:0xffffffff80a0df4f
stack pointer            = 0x28:0xfffffe00004dba60
frame pointer            = 0x28:0xfffffe00004dba80
code segment        = base 0x0, limit 0xfffff, type 0x1b
            = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags    = interrupt enabled, resume, IOPL = 0
current process        = 12 (swi4: clock (5))
trap number        = 12
panic: page fault
cpuid = 1
time = 1603281364
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00004db720
vpanic() at vpanic+0x17b/frame 0xfffffe00004db770
panic() at panic+0x43/frame 0xfffffe00004db7d0
trap_fatal() at trap_fatal+0x391/frame 0xfffffe00004db830
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00004db880
trap() at trap+0x286/frame 0xfffffe00004db990
calltrap() at calltrap+0x8/frame 0xfffffe00004db990
--- trap 0xc, rip = 0xffffffff80a0df4f, rsp = 0xfffffe00004dba60, rbp = 0xfffffe00004dba80 ---
filt_timerexpire() at filt_timerexpire+0x2f/frame 0xfffffe00004dba80
softclock_call_cc() at softclock_call_cc+0x141/frame 0xfffffe00004dbb30
softclock() at softclock+0x79/frame 0xfffffe00004dbb50
ithread_loop() at ithread_loop+0x23c/frame 0xfffffe00004dbbb0
fork_exit() at fork_exit+0x7e/frame 0xfffffe00004dbbf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00004dbbf0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Any ideas anyone?! This is making my life miserable.

Today I tried stripping down the whole system and rebuilding it from scratch, just in case there was a loose connection or similar. While I was at it, I replaced the PSU with a more powerful one, as I'd read on a random post somewhere that under-voltage can cause random kernel panics.

No luck, system crashed after an hour of uptime.
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Oh my days. Now it won't even boot. Before you ask - no, this is not a USB drive; freenas-boot is a mirrored pair of internal SSDs wired directly to the motherboard SATA ports.
[Attached screenshots showing the boot failure]
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Well my next step was going to be to re-install TrueNAS from scratch. So I guess this just accelerated that next step...
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
OK, so we are back online: fresh install of TrueNAS 12 RELEASE (boot drives formatted), restored from my last 11.3 backup. A few errors flew by on migration but I wasn't fast enough to capture them - are they kept anywhere?

Will see if this is any more stable.
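If the migration output is kept anywhere, my guess would be the middleware log - something like this might show it (log path assumed from a standard TrueNAS 12 install):

Code:
# look for errors logged around the time of the config import
grep -i error /var/log/middlewared.log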
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
No joy. Crashed again early this morning.

I've noticed that whenever the VMs crash, they also lock up the network interface they are bridged to, and anecdotally, they seem to crash during periods of increased I/O on the VMs (network or disk).
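If I/O really is the trigger, deliberately loading a guest might make the crash reproducible on demand - a rough sketch, run inside one of the VMs (dd is standard; iperf3 assumes a reachable server, and the address below is just an example):

Code:
# sustained disk writes with forced flushes (~4 GiB)
dd if=/dev/zero of=/tmp/iotest bs=1M count=4096 conv=fdatasync
# sustained traffic over the bridged interface for 5 minutes
iperf3 -c 192.168.1.10 -t 300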
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
5 kernel panics in the last 24 hours. I'm at my wits' end, someone please help me!

Code:
Dump header from device: /dev/ada1p3
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1750016
  Blocksize: 512
  Compression: none
  Dumptime: Mon Nov  2 08:20:15 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: Bad tailq NEXT(0xffffffff82135158->tqh_last) != NULL
  Dump Parity: 899386206
  Bounds: 2
  Dump Status: good


Code:
Dump header from device: /dev/ada1p3
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1768448
  Blocksize: 512
  Compression: none
  Dumptime: Mon Nov  2 00:22:01 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: Bad tailq NEXT(0xffffffff82134d98->tqh_last) != NULL
  Dump Parity: 794724958
  Bounds: 1
  Dump Status: good


Code:
Dump header from device: /dev/ada1p3
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1757696
  Blocksize: 512
  Compression: none
  Dumptime: Sun Nov  1 13:51:27 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: Bad tailq NEXT(0xffffffff82135518->tqh_last) != NULL
  Dump Parity: 1373866590
  Bounds: 0
  Dump Status: good


Code:
Dump header from device: /dev/ada1p3
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1752064
  Blocksize: 512
  Compression: none
  Dumptime: Sun Nov  1 10:48:57 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: Bad tailq NEXT(0xffffffff82134d98->tqh_last) != NULL
  Dump Parity: 2677902174
  Bounds: 4
  Dump Status: good


Code:
Dump header from device: /dev/ada1p3
  Architecture: amd64
  Architecture Version: 4
  Dump Length: 1751040
  Blocksize: 512
  Compression: none
  Dumptime: Sun Nov  1 08:33:26 2020
  Hostname: nas.lan
  Magic: FreeBSD Text Dump
  Version String: FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS
  Panic String: Bad tailq NEXT(0xffffffff82134d98->tqh_last) != NULL
  Dump Parity: 3497888606
  Bounds: 3
  Dump Status: good
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Can you find crash dumps in /data/crash?
I browsed the code and, while TrueNAS does insist on doing everything manually, the necessary bits seem to get set up at boot. It's just that they don't save the dumps in /var/crash, because that is part of the memory disk.
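A quick way to check from a shell (standard FreeBSD commands):

Code:
# anything savecore has written so far
ls -lh /data/crash
# which device is configured for kernel dumps
sysctl kern.shutdown.dumpdevname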
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Patrick - only text-dumps, not full core dumps (vmcores).

I think you can use sysctl to disable text dumps (which should, in theory, fall back to full vmcore dumps) with this command:

sysctl debug.ddb.textdump.pending=0

So I'm hoping to get a proper core dump on the next crash... watch this space.
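To confirm the setting took (and, assuming it behaves like any other sysctl, it could also be added as a tunable in the UI so it survives reboots):

Code:
# 0 means the next panic should produce a full vmcore instead of a text dump
sysctl debug.ddb.textdump.pending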
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Didn't work. Still only text-dumps in the /data/crash directory.
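For what it's worth, the text dumps can at least be unpacked and read - something like this (file names assumed from the usual savecore/textdump layout):

Code:
# list the contents of the newest text dump
tar -tf /data/crash/textdump.tar.0
# print the panic message it captured
tar -xOf /data/crash/textdump.tar.0 panic.txt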
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Having enabled the debug kernel, I am seeing a LOT of lock order reversal (LOR) notifications in the message buffer related to the ZFS modules. I am certain these cannot be normal.

Code:
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028975228 db->db_rwlock (db->db_rwlock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:1256
Nov  2 01:53:50 nas 2nd 0xfffff80028979938 dn->dn_mtx (dn->dn_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dnode.c:2303
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82cbe168 at dnode_block_freed+0xb8
Nov  2 01:53:50 nas #3 0xffffffff82c8cf7a at dbuf_read_impl+0x3ca
Nov  2 01:53:50 nas #4 0xffffffff82c834d2 at dbuf_read_+0x622
Nov  2 01:53:50 nas #5 0xffffffff82cbeb08 at dnode_hold_impl+0x2e8
Nov  2 01:53:50 nas #6 0xffffffff82c9a978 at dmu_buf_hold_noread+0x28
Nov  2 01:53:50 nas #7 0xffffffff82c9ab7c at dmu_buf_hold+0x1c
Nov  2 01:53:50 nas #8 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #9 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #10 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #11 0xffffffff82d304ba at spa_ld_trusted_config+0x4a
Nov  2 01:53:50 nas #12 0xffffffff82d2eeb3 at spa_ld_mos_with_trusted_config+0x33
Nov  2 01:53:50 nas #13 0xffffffff82d2d07c at spa_load_impl+0x7c
Nov  2 01:53:50 nas #14 0xffffffff82d256d3 at spa_load+0x53
Nov  2 01:53:50 nas #15 0xffffffff82d24e35 at spa_load_best+0x65
Nov  2 01:53:50 nas #16 0xffffffff82d21041 at spa_open_common+0x131
Nov  2 01:53:50 nas #17 0xffffffff82cebb21 at dsl_pool_hold+0x21
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028622e58 db->db_mtx (db->db_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:2819
Nov  2 01:53:50 nas 2nd 0xfffff80028618528 dn->dn_mtx (dn->dn_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:2847
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c8eccf at dbuf_dirty_compute_state+0x11f
Nov  2 01:53:50 nas #3 0xffffffff82c84fbf at dbuf_dirty+0xdf
Nov  2 01:53:50 nas #4 0xffffffff82ccac86 at dsl_dataset_sync+0x56
Nov  2 01:53:50 nas #5 0xffffffff82cea1fa at dsl_pool_sync+0xca
Nov  2 01:53:50 nas #6 0xffffffff82d2a9c4 at spa_sync+0x1014
Nov  2 01:53:50 nas #7 0xffffffff82d42ce4 at txg_sync_thread+0x484
Nov  2 01:53:50 nas #8 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #9 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas lock[2840]: Last message 'order reversal:' repeated 2 times, suppressed by syslog-ng on nas.lan
Nov  2 01:53:50 nas 1st 0xfffff8002897c800 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs 1st 0xfffff8002897c840 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfslock order reversal:
Nov  2 01:53:50 nas /multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff80028618528 dn->dn_mtx (dn->dn_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1513
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas 1st 0xfffff8002897c940 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff80021f0d000 dn->dn_struct_rwlock (dn->dn_struct_rwlock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dnode.c:275
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas 1st 0xfffff8002897c880 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff80004793190 buf->b_evict_lock (buf->b_evict_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/arc.c:6671
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas /multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff80028621098 db->db_mtx (db->db_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:5537
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witnes#0 0xffffffff80ae0d81 at witnes#0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80s_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80#0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a81ec7 at _sx_slock_int+0x67
Nov  2 01:53:50 nas #2 0xffffffff82cbbf18 at dnode_vea80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c7260a at arc_releasedrify+0x98
Nov  2 01:53:50 nas #3 0xffffffff82cc26ce+0x2a
Nov  2 01:53:50 nas #3 0xffffffff82cc26e4 at s_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82ca3aad at sync_dnodes_ at dnode_sync+0x7e
Nov  2 01:53:50 nas #4 0xffffffff82ca39df at sync_dnodes_task+dnode_sync+0x94
Nov  2 01:53:50 nas #4 0xffffffff82ca39df at sync_dnodes_task+0x5f0x5f
Nov  2 01:53:50 nas #5 0xffffffff82c21b2f at t
Nov  2 01:53:50 nas a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #5 0xffffffff82c21b2f at taskq#2 0xffffffff82c8bd5c at dbuf_sync_letask+0x12d
Nov  2 01:53:50 nas #3 0xffffffff82c21b2askq_run+0x1f
Nov  2 01:53:50 nas #6 0xffffffff80ad38a8 at taskqueue_run_locked+0x_run+0x1f
Nov  2 01:53:50 nas #6 0xffffffff80ad38a8af+0x16c
Nov  2 01:53:50 nas #3 0xffffffff82c8b7bb 168
Nov  2 01:53:50 nas #7 0xffffffff80ad4824 at ta at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #7 0xffffffff80ad4824 at taskquf at taskq_run+0x1f
Nov  2 01:53:50 nas #4 0xffffffff80ad38a8 at taskqueue_run_locskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #8 0xffffffff80a378d0 at fork_exit+0xeue_thread_loop+0x94
Nov  2 01:53:50 nas #8 0xfffffat dbuf_sync_list+0xbb
Nov  2 01:53:50 nas #4 0xffffffff82cc3567 at dnode_sync+0xfked+0x168
Nov  2 01:53:50 nas #5 0xffffffff80ad482480
Nov  2 01:53:50 nas #9 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas fff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #9 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas 17
Nov  2 01:53:50 nas #5 0xffffffff82ca39df at syn at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #6 0xffffffff80a378d0 at fork_ec_dnodes_task+0x5f
Nov  2 01:53:50 nas #6 0xfffffffxit+0x80
Nov  2 01:53:50 nas #7 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas f82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #7 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #8 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #9 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #10 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff8002897c980 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff8002826b000 ms->ms_lock (ms->ms_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/metaslab.c:5992
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82d09627 at metaslab_check_free_impl+0x107
Nov  2 01:53:50 nas #3 0xffffffff82d0b74e at metaslab_check_free+0x5e
Nov  2 01:53:50 nas #4 0xffffffff82e03e17 at zio_free+0x47
Nov  2 01:53:50 nas #5 0xffffffff82cc6015 at dsl_dataset_block_kill+0x725
Nov  2 01:53:50 nas #6 0xffffffff82cc3bd2 at free_blocks+0xd2
Nov  2 01:53:50 nas #7 0xffffffff82cc3fa4 at dnode_sync_free_range+0xf4
Nov  2 01:53:50 nas #8 0xffffffff82d15d72 at range_tree_walk+0x62
Nov  2 01:53:50 nas #9 0xffffffff82cc2c2a at dnode_sync+0x5da
Nov  2 01:53:50 nas #10 0xffffffff82ca39df at sync_dnodes_task+0x5f
Nov  2 01:53:50 nas #11 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #12 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #13 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #14 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #15 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff8002897c980 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/multilist.c:288
Nov  2 01:53:50 nas 2nd 0xffffffff82ff5bf8 buf_hash_table.ht_locks.ht_lock (buf_hash_table.ht_locks.ht_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/arc.c:1012
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c6d9aa at buf_hash_find+0xba
Nov  2 01:53:50 nas #3 0xffffffff82c7063f at arc_freed+0x2f
Nov  2 01:53:50 nas #4 0xffffffff82e03f3c at zio_free_sync+0x6c
Nov  2 01:53:50 nas #5 0xffffffff82e03e6b at zio_free+0x9b
Nov  2 01:53:50 nas #6 0xffffffff82cc6015 at dsl_dataset_block_kill+0x725
Nov  2 01:53:50 nas #7 0xffffffff82cc3bd2 at free_blocks+0xd2
Nov  2 01:53:50 nas #8 0xffffffff82cc3fa4 at dnode_sync_free_range+0xf4
Nov  2 01:53:50 nas #9 0xffffffff82d15d72 at range_tree_walk+0x62
Nov  2 01:53:50 nas #10 0xffffffff82cc2c2a at dnode_sync+0x5da
Nov  2 01:53:50 nas #11 0xffffffff82ca39df at sync_dnodes_task+0x5f
Nov  2 01:53:50 nas #12 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #13 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #14 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #15 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #16 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff8002897c980 mls->mls_lock (mls->mls_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/multilist.c:288
Nov  2 01:53:50 nas 2nd 0xfffff80028bea728 dn->dn_dbufs_mtx (dn->dn_dbufs_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dnode_sync.c:463
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82cc2421 at dnode_evict_dbufs+0x41
Nov  2 01:53:50 nas #3 0xffffffff82cc2d20 at dnode_sync+0x6d0
Nov  2 01:53:50 nas #4 0xffffffff82ca39df at sync_dnodes_task+0x5f
Nov  2 01:53:50 nas #5 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #6 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #7 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #8 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #9 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028daaba8 zfs (zfs) @ /truenas-releng/freenas/_BE/os/sys/kern/vfs_lookup.c:693
Nov  2 01:53:50 nas 2nd 0xfffffe00c87e8870 zfsvfs->z_hold_mtx (zfsvfs->z_hold_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/os/freebsd/zfs/zfs_znode.c:947
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c4d4e6 at zfs_zget+0x56
Nov  2 01:53:50 nas #3 0xffffffff82c33ba9 at zfs_dirent_lookup+0x169
Nov  2 01:53:50 nas #4 0xffffffff82c33d0b at zfs_dirlook+0xdb
Nov  2 01:53:50 nas #5 0xffffffff82c4a4e0 at zfs_lookup+0x440
Nov  2 01:53:50 nas #6 0xffffffff82c42d62 at zfs_freebsd_cachedlookup+0x72
Nov  2 01:53:50 nas #7 0xffffffff811ca01e at VOP_CACHEDLOOKUP_APV+0xce
Nov  2 01:53:50 nas #8 0xffffffff80b367cc at vfs_cache_lookup+0xac
Nov  2 01:53:50 nas #9 0xffffffff811c9e1e at VOP_LOOKUP_APV+0xce
Nov  2 01:53:50 nas #10 0xffffffff80b3fba1 at lookup+0x491
Nov  2 01:53:50 nas #11 0xffffffff80b3f269 at namei+0x459
Nov  2 01:53:50 nas #12 0xffffffff80b4698a at vfs_mountroot+0x10ea
Nov  2 01:53:50 nas #13 0xffffffff80a0f047 at start_init+0x27
Nov  2 01:53:50 nas #14 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #15 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas ums0 on uhub6
Nov  2 01:53:50 nas ums0: <vendor 0x0557 product 0x2419, class 0/0, rev 1.10/1.00, addr 3> on usbus0
Nov  2 01:53:50 nas ums0: 3 buttons and [Z] coordinates ID=0
Nov  2 01:53:50 nas kernel: lo0: link state changed to UP
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/vmm.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/opensolaris.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/sdt.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/systrace.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/systrace_freebsd32.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas warning: KLD '/boot/kernel-debug/profile.ko' is newer than the linker.hints file
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff802d27e4150 zcw->zcw_lock (zcw->zcw_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/zil.c:2517
Nov  2 01:53:50 nas 2nd 0xfffff80004779400 zilog->zl_lock (zilog->zl_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/zil.c:565
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82e00cd3 at zil_alloc_lwb+0x193
Nov  2 01:53:50 nas #3 0xffffffff82e0179d at zil_lwb_write_issue+0x3fd
Nov  2 01:53:50 nas #4 0xffffffff82dfeab9 at zil_commit_impl+0x1a89
Nov  2 01:53:50 nas #5 0xffffffff82c46ef3 at zfs_freebsd_fsync+0xd3
Nov  2 01:53:50 nas #6 0xffffffff811cc074 at VOP_FSYNC_APV+0xd4
Nov  2 01:53:50 nas #7 0xffffffff80b5a3de at kern_fsync+0x17e
Nov  2 01:53:50 nas #8 0xffffffff80feb34e at amd64_syscall+0x2be
Nov  2 01:53:50 nas #9 0xffffffff80fc1d4e at fast_syscall_common+0xf8
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffffe00cc6079b8 vd->vdev_dtl_lock (vd->vdev_dtl_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/vdev.c:2910
Nov  2 01:53:50 nas 2nd 0xfffff8028da2e820 dn->dn_struct_rwlock (dn->dn_struct_rwlock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu.c:1609
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a81ec7 at _sx_slock_int+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c9b409 at dmu_prefetch+0x149
Nov  2 01:53:50 nas #3 0xffffffff82d3f1e6 at space_map_iterate+0xd6
Nov  2 01:53:50 nas #4 0xffffffff82d3ff03 at space_map_load_length+0x83
Nov  2 01:53:50 nas #5 0xffffffff82d4a95f at vdev_dtl_load+0xdf
Nov  2 01:53:50 nas #6 0xffffffff82d4c93e at vdev_load+0x1fe
Nov  2 01:53:50 nas #7 0xffffffff82d4c780 at vdev_load+0x40
Nov  2 01:53:50 nas #8 0xffffffff82d4c780 at vdev_load+0x40
Nov  2 01:53:50 nas #9 0xffffffff82d2e475 at spa_load_impl+0x1475
Nov  2 01:53:50 nas #10 0xffffffff82d256d3 at spa_load+0x53
Nov  2 01:53:50 nas #11 0xffffffff82d2535e at spa_tryimport+0x17e
Nov  2 01:53:50 nas #12 0xffffffff82df1c9d at zfs_ioc_pool_tryimport+0x3d
Nov  2 01:53:50 nas #13 0xffffffff82dea0c9 at zfsdev_ioctl_common+0x489
Nov  2 01:53:50 nas #14 0xffffffff82c28c56 at zfsdev_ioctl+0x146
Nov  2 01:53:50 nas #15 0xffffffff80909bd5 at devfs_ioctl+0xb5
Nov  2 01:53:50 nas #16 0xffffffff811cb934 at VOP_IOCTL_APV+0xd4
Nov  2 01:53:50 nas #17 0xffffffff80b5ca5d at vn_ioctl+0x13d
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xfffff800287a9820 dn->dn_struct_rwlock (dn->dn_struct_rwlock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dnode.c:275
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a81ec7 at _sx_slock_int+0x67
Nov  2 01:53:50 nas #2 0xffffffff82cbbf18 at dnode_verify+0x98
Nov  2 01:53:50 nas #3 0xffffffff82cbe984 at dnode_hold_impl+0x164
Nov  2 01:53:50 nas #4 0xffffffff82c9a978 at dmu_buf_hold_noread+0x28
Nov  2 01:53:50 nas #5 0xffffffff82c9ab7c at dmu_buf_hold+0x1c
Nov  2 01:53:50 nas #6 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #7 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #8 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #9 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #10 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #11 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #12 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #13 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #14 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #15 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xfffff800287a9938 dn->dn_mtx (dn->dn_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dnode.c:1601
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82cc0027 at dnode_rele+0x27
Nov  2 01:53:50 nas #3 0xffffffff82c9a9fb at dmu_buf_hold_noread+0xab
Nov  2 01:53:50 nas #4 0xffffffff82c9ab7c at dmu_buf_hold+0x1c
Nov  2 01:53:50 nas #5 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #6 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #7 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #8 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #9 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #10 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #11 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #12 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #13 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #14 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xfffff802f2030e58 db->db_mtx (db->db_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:2113
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c82fa4 at dbuf_read_+0xf4
Nov  2 01:53:50 nas #3 0xffffffff82c9aba9 at dmu_buf_hold+0x49
Nov  2 01:53:50 nas #4 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #5 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #6 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #7 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #8 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #9 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #10 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #11 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #12 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #13 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xffffffff82fe5bf8 buf_hash_table.ht_locks.ht_lock (buf_hash_table.ht_locks.ht_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/arc.c:1012
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c6d9aa at buf_hash_find+0xba
Nov  2 01:53:50 nas #3 0xffffffff82c6bab3 at arc_read+0x1b3
Nov  2 01:53:50 nas #4 0xffffffff82c8d8e0 at dbuf_read_impl+0xd30
Nov  2 01:53:50 nas #5 0xffffffff82c834d2 at dbuf_read_+0x622
Nov  2 01:53:50 nas #6 0xffffffff82c9aba9 at dmu_buf_hold+0x49
Nov  2 01:53:50 nas #7 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #8 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #9 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #10 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #11 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #12 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #13 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #14 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #15 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #16 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xfffffe00c7d149b8 vd->vdev_dtl_lock (vd->vdev_dtl_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/vdev.c:2599
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82d4af77 at vdev_dtl_contains+0x67
Nov  2 01:53:50 nas #3 0xffffffff82d5d41a at vdev_mirror_child_select+0xda
Nov  2 01:53:50 nas #4 0xffffffff82d5c747 at vdev_mirror_io_start+0x57
Nov  2 01:53:50 nas #5 0xffffffff82e0d149 at zio_vdev_io_start+0x439
Nov  2 01:53:50 nas #6 0xffffffff82e0448c at zio_nowait+0x12c
Nov  2 01:53:50 nas #7 0xffffffff82c6ca1e at arc_read+0x111e
Nov  2 01:53:50 nas #8 0xffffffff82c8d8e0 at dbuf_read_impl+0xd30
Nov  2 01:53:50 nas #9 0xffffffff82c834d2 at dbuf_read_+0x622
Nov  2 01:53:50 nas #10 0xffffffff82c9aba9 at dmu_buf_hold+0x49
Nov  2 01:53:50 nas #11 0xffffffff82dd7fae at zap_lockdir+0x2e
Nov  2 01:53:50 nas #12 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #13 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #14 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #15 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #16 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #17 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1825
Nov  2 01:53:50 nas 2nd 0xfffff801ef12a4f0 zap->zap_rwlock (zap->zap_rwlock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/zap_micro.c:421
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82dd81c9 at zap_lockdir_impl+0x159
Nov  2 01:53:50 nas #3 0xffffffff82dd803e at zap_lockdir+0xbe
Nov  2 01:53:50 nas #4 0xffffffff82dd974c at zap_lookup_norm+0x3c
Nov  2 01:53:50 nas #5 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #6 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #7 0xffffffff82ca4344 at userquota_updates_task+0x524
Nov  2 01:53:50 nas #8 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #9 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #10 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #11 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #12 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1848
Nov  2 01:53:50 nas 2nd 0xfffff80028e31b38 dn->dn_dbufs_mtx (dn->dn_dbufs_mtx) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:4166
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c86ec8 at dbuf_destroy+0x278
Nov  2 01:53:50 nas #3 0xffffffff82dd97d1 at zap_lookup_norm+0xc1
Nov  2 01:53:50 nas #4 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #5 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #6 0xffffffff82ca4534 at userquota_updates_task+0x714
Nov  2 01:53:50 nas #7 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #8 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #9 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #10 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #11 0xffffffff80fc245e at fork_trampoline+0xe
Nov  2 01:53:50 nas lock order reversal:
Nov  2 01:53:50 nas 1st 0xfffff80028be5bb8 os->os_userused_lock (os->os_userused_lock) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dmu_objset.c:1848
Nov  2 01:53:50 nas 2nd 0xffffffff83026ef0 h->hash_mutexes (h->hash_mutexes) @ /wrkdirs/usr/ports/sysutils/openzfs-kmod/work/zfs-091aa0122/module/zfs/dbuf.c:508
Nov  2 01:53:50 nas stack backtrace:
Nov  2 01:53:50 nas #0 0xffffffff80ae0d81 at witness_debugger+0x71
Nov  2 01:53:50 nas #1 0xffffffff80a80e07 at _sx_xlock+0x67
Nov  2 01:53:50 nas #2 0xffffffff82c86fab at dbuf_destroy+0x35b
Nov  2 01:53:50 nas #3 0xffffffff82dd97d1 at zap_lookup_norm+0xc1
Nov  2 01:53:50 nas #4 0xffffffff82dd9701 at zap_lookup+0x11
Nov  2 01:53:50 nas #5 0xffffffff82dd36d1 at zap_increment+0x41
Nov  2 01:53:50 nas #6 0xffffffff82ca4534 at userquota_updates_task+0x714
Nov  2 01:53:50 nas #7 0xffffffff82c21b2f at taskq_run+0x1f
Nov  2 01:53:50 nas #8 0xffffffff80ad38a8 at taskqueue_run_locked+0x168
Nov  2 01:53:50 nas #9 0xffffffff80ad4824 at taskqueue_thread_loop+0x94
Nov  2 01:53:50 nas #10 0xffffffff80a378d0 at fork_exit+0x80
Nov  2 01:53:50 nas #11 0xffffffff80fc245e at fork_trampoline+0xe
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Just an update to wrap up this thread for anyone who finds it in the future!

The kernel panic issue was raised as a bug on FreeBSD. The devs were unable to reproduce or debug it without me providing full kernel-state core dumps (which is remarkably hard to do within TrueNAS, but that's another story). Then, by complete chance, a week or so later the FreeBSD kernel fuzzer found the exact same bug, which allowed them to create a fix. I tested the patch, confirmed it was a good fix, and the commit has been pulled into the upcoming TrueNAS 12.0-U1 release.

Thanks to everyone on the FreeBSD and TrueNAS teams for this! I can sleep easy again :smile:
 

taney

Dabbler
Joined
Dec 5, 2012
Messages
12
Hi @seb101 ! Are you still having the issues? I've been experiencing this issue for months on the latest TrueNAS updates. I have the same motherboard as you and it's driving me nuts! I followed all your updates on the other pages as well but haven't found a solution. The only thing I haven't tried is opening a ticket, but I figured you might be able to offer some insight.
 

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Hi! I had no further issues related to this once the FreeBSD fix was rolled into 12.0-U1; the system has been 100% stable ever since.
 

taney

Dabbler
Joined
Dec 5, 2012
Messages
12
I may have figured this one out!

I decided to roll back a release update, remove all tunable options, update to the latest version, and allow the update to rebuild the tunables. The NAS has been up for over 3 days without a reboot (the longest it's stayed up since we've had the issue).

Not sure which tunable is/was causing the issues.
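For anyone trying the same thing, it may be worth dumping the configured tunables from a shell before removing them, so the before/after can be compared (midclt ships with TrueNAS; output is JSON):

Code:
# list every tunable currently stored in the config database
midclt call tunable.query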
 