TrueNAS kernel panic - help needed

suzdal

Cadet
Joined
Jan 31, 2013
Messages
7
Hi.

I'm running TrueNAS CORE (TrueNAS-12.0-U2.1). I've had it for many years and it has always worked perfectly. A couple of months ago I decided to move my physical Seafile server into a Debian 10 virtual machine inside TrueNAS.
To explain the setup: Seafile is connected to TrueNAS through an NFS mount point. Everything was running fine until this afternoon, when TrueNAS hit a kernel panic. Now it's stuck in a boot loop, panicking at "syncing" while importing the pool that holds everything: all my documents, pictures, all my Seafile data and SMB shares. And yes, I'm nervous now.

I'm running TrueNAS on:
Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
48 GB DDR4 ECC RAM
4 x 500 GB Seagate (Pool-1)
4 x 4 TB WD Red (Pool-2, the one with the error)

So, is there any way to get my data back?

What do you need from me in order to help?
Tell me what I should do, which commands to type, etc. Please, I need help.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
What was the exact text of the panic? You haven't given enough info to help yet.
  • What's the actual topology of the pool with the error?
  • Which way did the NFS mount go? Did seafile NFS mount from TrueNAS, or did TrueNAS mount from seafile?
  • What's backing seafile? Did you use a zvol as the disk image for the VM?
 

suzdal

Cadet
Joined
Jan 31, 2013
Messages
7
There's no topology to show; I can't mount it. But since you ask: it was a RAIDZ1. I preferred space over redundancy. I figured that if one drive failed, Amazon would replace it within a day... so yes, my fault.
TrueNAS provides the NFS share and Debian mounts it as /opt/seafile.
The NFS share is a dataset.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, please provide the output of zpool status -v <name of your Red pool>.
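For example (the hardware list above calls the Red pool "Pool-2"; substitute whatever name zpool list actually shows):

```shell
# Detailed status of the suspect pool, including per-file errors (-v).
# The fallback message only fires if the pool isn't imported at all.
zpool status -v Pool-2 || echo "Pool-2 is not currently imported"
```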
 

suzdal

Cadet
Joined
Jan 31, 2013
Messages
7
I unplugged the drives because otherwise TrueNAS goes into a reboot loop.

Right now the pool is disconnected from TrueNAS. If I import it again, the kernel panic comes back and it loops again and again, rebooting at "syncing" during boot.

I tried to start in safe mode, but the error at syncing still occurs.

Is there a core dump or a log? Where can I find it?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
The cores are all in /var/db/system/cores. Look also in /var/log/console.log. I'll need to know the exact text of the panic.
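Assuming a stock TrueNAS CORE layout, something like this will show both (the || echo fallbacks just print a readable message when nothing is there):

```shell
# List any core dumps the panic may have produced.
ls -l /var/db/system/cores 2>/dev/null || echo "no cores directory"

# Pull the panic text, with some surrounding context, out of the console log.
grep -i -B 2 -A 10 "panic" /var/log/console.log 2>/dev/null || echo "no panic lines in console.log"
```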
 

suzdal

Cadet
Joined
Jan 31, 2013
Messages
7
Hi.

The cores directory is empty, but in console.log I found this:

Mar 11 20:50:05 freenas Importing Pool-02
Mar 11 20:50:05 freenas spa.c:6138:spa_tryimport(): spa_tryimport: importing Pool-02
Mar 11 20:50:05 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
Mar 11 20:50:05 freenas spa.c:8187:spa_async_request(): spa=$import async request task=1
Mar 11 20:50:05 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/a5ddfe79-6bdf-11eb-87cd-6805ca007b16': vdev_geom_open: failed to open [error=2]
Mar 11 20:50:05 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/a5d07b77-6bdf-11eb-87cd-6805ca007b16': vdev_geom_open: failed to open [error=2]
Mar 11 20:50:05 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): vdev tree has 1 missing top-level vdevs.
Mar 11 20:50:05 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): current settings allow for maximum 0 missing top-level vdevs at this stage.
Mar 11 20:50:05 freenas spa_misc.c:396:spa_load_failed(): spa_load($import, config untrusted): FAILED: unable to open vdev tree [error=2]
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 0: root, guid: 14137448965918047143, path: N/A, can't open
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 0: raidz, guid: 17134527342197451087, path: N/A, can't open
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 0: disk, guid: 8464052132362910624, path: /dev/gptid/a579f19a-6bdf-11eb-87cd-6805ca007b16, healthy
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 1: disk, guid: 15547647455722688621, path: /dev/gptid/a586ee20-6bdf-11eb-87cd-6805ca007b16, healthy
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 2: disk, guid: 10692275990118619762, path: /dev/gptid/a5d07b77-6bdf-11eb-87cd-6805ca007b16, can't open
Mar 11 20:50:05 freenas vdev.c:183:vdev_dbgmsg_print_tree(): vdev 3: disk, guid: 5188781440804839517, path: /dev/gptid/a5ddfe79-6bdf-11eb-87cd-6805ca007b16, can't open
Mar 11 20:50:05 freenas spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): UNLOADING
Mar 11 20:50:05 freenas spa.c:5990:spa_import(): spa_import: importing Pool-02
Mar 11 20:50:05 freenas spa_misc.c:411:spa_load_note(): spa_load(Pool-02, config trusted): LOADING
Mar 11 20:50:05 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/a5ddfe79-6bdf-11eb-87cd-6805ca007b16': vdev_geom_open: failed to open [error=2]
Mar 11 20:50:05 freenas vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/a5d07b77-6bdf-11eb-87cd-6805ca007b16': vdev_geom_open: failed to open [error=2]
Mar 11 20:50:05 freenas Pools import completed
Mar 11 20:50:05 freenas Loading kernel modules:
Mar 11 20:50:05 freenas Setting hostname: freenas.local.
Mar 11 20:50:05 freenas Setting up harvesting: PURE_RDRAND,[UMA],[FS_ATIME],SWI,INTERRUPT,NET_NG,NET_ETHER,NET_TUN,MOUSE,KEYBOARD,ATTACH,CACHED
Mar 11 20:50:05 freenas Feeding entropy: .
Mar 11 20:50:05 freenas Starting interfaces...
Mar 11 20:50:05 freenas Generating configuration for interface_sync checkpoint
Mar 11 20:50:05 freenas ELF ldconfig path: /lib /usr/lib /usr/lib/compat /usr/local/lib /usr/local/lib/compat/pkg /usr/local/lib/compat/pkg /usr/local/lib/e2fsprogs /usr/local/lib/gcc7 /usr/local/lib/gcc9 /usr/local/lib/perl5/5.30/mach/CORE /usr/local/lib/samba4
Mar 11 20:50:05 freenas 32-bit compatibility ldconfig path:



I can't understand that. Two disks failing at the same time? They are one month old...
I really don't get it. Pool-1 is four Seagates more than 10 years old, and they're fine. Is that really the error?
Can it be fixed?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Unfortunately, RAIDZ1 Pool-02 has lost 2 drives, and your pool is lost. Yes, drives can fail early; this is a common occurrence known as infant mortality.
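Before writing both drives off, it can be worth confirming whether FreeBSD sees them at all: two drives vanishing at the same moment is often a shared cable, power, or controller-port problem rather than two simultaneous disk deaths. A quick check from the TrueNAS CORE shell (the gptid values to look for are the "can't open" ones in the log above):

```shell
# Show which GPT labels (gptids) the system can currently see.
glabel status 2>/dev/null || echo "glabel not available on this system"

# Show which physical disks the kernel has attached.
camcontrol devlist 2>/dev/null || echo "camcontrol not available on this system"
```

If the two failing gptids don't appear at all, the disks (or their path to the system) are not being detected, which points at hardware rather than ZFS.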

Your only hope at this point is a ZFS recovery utility such as Klennet.
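As a long shot before reaching for paid recovery tools, a read-only rewind import is sometimes attempted once the drives are reattached. This is only a sketch, not a guarantee: it may simply trigger the same panic, so run it from a local console rather than over SSH.

```shell
# -o readonly=on: never write to the damaged pool
# -F: try rewinding to the last importable transaction group
# "Pool-02" is the pool name shown in the console.log above.
zpool import -o readonly=on -F Pool-02 || echo "read-only rewind import failed"
```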
 