
My Working FreeNAS is throwing a kernel panic error

redstonemason

Junior Member
Joined
Aug 3, 2014
Messages
16
This has been a tremendous system over the years. The updates have never caused me an issue, and the GUI has constantly improved.

I run three Western Digital Red drives in one large pool. But the system went offline, and I had to attach a monitor and keyboard to it.

To my surprise, I was seeing a kernel panic error, but the screen scrolls away and the machine keeps cycling through reboots.

So I assumed the non-ECC memory or the power supply was the problem.

But after migrating the boot SSD and the 3 drives to another motherboard, memory and power supply, I get the same results.

So the only common thing left is the SanDisk boot SSD.

I could supply all the specs of the old or the new equipment, but either way my root question is...

Are the Western Digital drives still intact, and can they be found if I re-install from the latest version of FreeNAS?

Thanks in advance.
 

NugentS

Neophyte Sage
Joined
Apr 16, 2020
Messages
662
Probably -- depends on what is actually wrong.
Give it a try and see what happens. Rebuilding the OS is NOT a dataset-fatal activity.
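For context, the usual non-destructive path looks something like this (a sketch, assuming the pool is still named volume1 and you have a shell on the freshly installed system):

```shell
# Reinstall the OS on a separate boot device, leaving the data disks alone.
# Then, from a shell on the new install, see which pools ZFS can find:
zpool import

# Import by name; -f is needed because the pool was last mounted by the
# old install (hostid mismatch), not because anything is necessarily broken:
zpool import -f volume1
```

The pool metadata lives on the data disks themselves, which is why a boot-device reinstall does not touch it.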
 

redstonemason

Junior Member
Joined
Aug 3, 2014
Messages
16
I will set the current SanDisk SSD aside for now in case there is a configuration still left on it.

Then I will try a fresh install on a new SSD so as not to overwrite the current SSD at all.

If I can get the 3 Reds up and running, then I would like to rebuild on TrueNAS SCALE.
I like BSD because of my ancient history with Unix, the VAX-11/780, AT&T Unix and then Berkeley, but Linux from Debian with native support for ZFS sounds like an awesome bleeding-edge future path for me to follow.
 

redstonemason

Junior Member
Joined
Aug 3, 2014
Messages
16
So I have a new system up and running with a brand new SSD boot disk with TrueNAS Core and my 3 WD RED drives are now plugged in but my pool 'volume1' will not import.
Code:
  zpool import
 pool: volume1
     id: 12323590901269420005
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        volume1                                         ONLINE
          raidz1-0                                      ONLINE
            gptid/18fcb685-1a5e-11e4-96a8-74d4358d4506  ONLINE
            gptid/19c17c7e-1a5e-11e4-96a8-74d4358d4506  ONLINE
            gptid/1a7a2138-1a5e-11e4-96a8-74d4358d4506  ONLINE


So I have initiated:
Code:
zpool import -f -FX -N -T 44421727 12323590901269420005

but it runs forever and blocks Storage/Pools and Storage/Disks from reporting anything in the web panel.

The only good news is that I am not getting a kernel panic. The bad news is I still don't have the pool working.

What next? Do I send the drives to someone who knows TrueNAS inside and out?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,467
So I have a new system up and running with a brand new SSD boot disk with TrueNAS Core and my 3 WD RED drives are now plugged in but my pool 'volume1' will not import.
Code:
  zpool import
pool: volume1
     id: 12323590901269420005
  state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:

        volume1                                         ONLINE
          raidz1-0                                      ONLINE
            gptid/18fcb685-1a5e-11e4-96a8-74d4358d4506  ONLINE
            gptid/19c17c7e-1a5e-11e4-96a8-74d4358d4506  ONLINE
            gptid/1a7a2138-1a5e-11e4-96a8-74d4358d4506  ONLINE


So I have initiated:
Code:
zpool import -f -FX -N -T 44421727 12323590901269420005

but it runs forever and blocks Storage/Pools and/or Storage/Disks from reporting anything on the Web Panel.

The only good news is that I am not getting a kernel panic. The bad news is I still don't have the pool working.

What next? Do I send the drives to someone who knows TrueNAS inside and out?
Just curious -- how did you determine the transaction group specifier for option '-T' above?

Did simpler import attempts fail? Examples:
Code:
zpool import -f 12323590901269420005
zpool import -f -D 12323590901269420005
zpool import -f -D -F 12323590901269420005
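A read-only import is another low-risk option worth trying before any -FX rewind (a sketch, assuming the pool name volume1):

```shell
# Read-only import: nothing is written to the pool, so it cannot make
# matters worse while you verify the data and copy it off.
zpool import -f -o readonly=on volume1
```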
 

redstonemason

Junior Member
Joined
Aug 3, 2014
Messages
16
I used the highest TXG number from the 32 uberblock records returned by:
Code:
zdb -ul 12323590901269420005


I did not try the simpler attempts.
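For anyone following along, zdb can dump the uberblocks stored in a pool member's label; run against a device it looks something like this (device path taken from the earlier zpool import listing; the grep is just to pull out the txg lines):

```shell
# List the uberblocks in the label of one pool member and show their
# transaction group (txg) numbers and timestamps:
zdb -ul /dev/gptid/18fcb685-1a5e-11e4-96a8-74d4358d4506 | grep -E 'txg|timestamp'
```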
 

redstonemason

Junior Member
Joined
Aug 3, 2014
Messages
16
And now I have tried them, after rebooting. (BTW, I could not kill the long-running "zpool import -f -FX -N -T 44421727 12323590901269420005". Even killing the parent process did not kill it.)

Results:

Code:
zpool import -f -D -F 12323590901269420005
cannot import '12323590901269420005': no such pool available
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,467
And now I have tried them, after rebooting. (BTW, I could not kill the long-running "zpool import -f -FX -N -T 44421727 12323590901269420005". Even killing the parent process did not kill it.)

Results:

Code:
zpool import -f -D -F 12323590901269420005
cannot import '12323590901269420005': no such pool available
Hmmm... Perhaps try using the pool name (volume1) instead?
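Something along these lines (a sketch; the name comes from your zpool import listing):

```shell
# Plain forced import by name:
zpool import -f volume1

# If that fails, try the rewind-to-last-good-txg variant:
zpool import -f -F volume1
```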
 