Upgrade to TrueNAS-13.0-U5.2 and now my Storage Pool is OFFLINE

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
jgreco said:
Have you tried booting to a previous environment from the System -> Boot menu?

The only thought here is that somehow the previous environment was using a RAID-aware driver (possibly through a tunable?) and your updated installation has shifted to attempt to communicate with the raw disks, and it's not understanding the PERC-supplied header that it's getting from the raw disks.
Yes. I have tried them all with no luck.

[screenshot: list of available boot environments]


I personally did not mess with any tunables. This is what is in there now.

[screenshot: current tunables list]
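
(A hedged aside on the driver theory quoted above: on FreeBSD the PERC H730P can be claimed by either mfi(4) or mrsas(4), and the hw.mfi.mrsas_enable loader tunable controls that handoff. A quick way to check from the shell which driver actually attached, as a sketch:)

Code:
# Which driver attached to the controller?
dmesg | egrep -i 'mfi[0-9]|mrsas[0-9]'
# Loader tunable controlling the mfi -> mrsas handoff (0 = mfi, 1 = mrsas):
sysctl hw.mfi.mrsas_enable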
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
Would re-installing 13.0-U5.1 manually vs reverting in System>Boot be worth a shot? I have also ordered a PERC H330 and am expecting it within the next few days.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Tong_Po said:
Would re-installing 13.0-U5.1 manually vs reverting in System>Boot be worth a shot?

Unlikely.

Do you know what you had for the RAID controller's setup previously? Did you set up each disk individually as a JBOD disk? Or did you use the RAID controller to create some sort of RAID virtual drive?
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
jgreco said:
Unlikely.

Do you know what you had for the RAID controller's setup previously? Did you set up each disk individually as a JBOD disk? Or did you use the RAID controller to create some sort of RAID virtual drive?
As far as storage, each disk was set up individually as a RAID 0. That was the only way TrueNAS would recognize the drives, in my setup at least, since I was in RAID mode and not HBA mode. As for boot, I have 2 drives in a RAID 1.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, that is hopeful. I believe it means that the PERC controller has placed a pseudo-partition on the disk that holds the "virtual disk" you created. I haven't actually looked at what the modern PERC does, but it gives us options to explore. Some RAID controllers no longer do this and instead just start the RAID0 or RAID1 at the beginning of the disk, which is easier in some ways and a little more difficult to deal with in others.

If you go into the PERC configuration tool (via the BIOS/EFI, for example) do you still see all the virtual RAID0 volumes?
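(A hedged aside on the pseudo-partition point above: one way to see whether any partition metadata survives at the start of a disk is to dump the sector where a GPT header would live; da3 below is just an example device, and 512-byte sectors are assumed.)

Code:
# Dump LBA 1, where the GPT header lives on a 512-byte-sector disk, and
# look for the "EFI PART" signature at the very start of the output:
dd if=/dev/da3 bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -n 2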

Tong_Po said:
That was the only way TrueNAS would recognize the drives

And please understand that I'm trying to educate, not criticize, when I say that sentences starting out this way are usually an indication something bad's about to happen. Forcing TrueNAS to recognize something it doesn't seem to want to is usually an indication that the wrong thing is happening. Unwrapping the bad thing that happens ends up being a lot more work than getting the right thing to happen in the first place, if you get what I'm saying.
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
jgreco said:
If you go into the PERC configuration tool (via the BIOS/EFI, for example) do you still see all the virtual RAID0 volumes?

And please understand that I'm trying to educate, not criticize [...]
Yes, I still see them.

[screenshot: PERC configuration tool showing the virtual RAID0 disks]


I am not taking it negatively. I am soaking up all the knowledge you are putting out. As I said, I am no guru on TrueNAS. I do appreciate all the feedback and knowledge.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And if you're in TrueNAS and you go to the shell and type "camcontrol devlist" ...?
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
Code:
root@ushpst-san02[~]# camcontrol devlist
<DELL PERC H730P Mini 4.30>        at scbus0 target 0 lun 0 (pass0,da0)
<DELL PERC H730P Mini 4.30>        at scbus0 target 1 lun 0 (pass1,da1)
<DELL PERC H730P Mini 4.30>        at scbus0 target 2 lun 0 (pass2,da2)
<DELL PERC H730P Mini 4.30>        at scbus0 target 3 lun 0 (pass3,da3)
<DELL PERC H730P Mini 4.30>        at scbus0 target 4 lun 0 (pass4,da4)
<DELL PERC H730P Mini 4.30>        at scbus0 target 5 lun 0 (pass5,da5)
<DELL PERC H730P Mini 4.30>        at scbus0 target 6 lun 0 (pass6,da6)
<DELL PERC H730P Mini 4.30>        at scbus0 target 7 lun 0 (pass7,da7)
<DELL PERC H730P Mini 4.30>        at scbus0 target 8 lun 0 (pass8,da8)
<DELL PERC H730P Mini 4.30>        at scbus0 target 9 lun 0 (pass9,da9)
<DELL PERC H730P Mini 4.30>        at scbus0 target 10 lun 0 (pass10,da10)
<DELL PERC H730P Mini 4.30>        at scbus0 target 11 lun 0 (pass11,da11)
<DELL PERC H730P Mini 4.30>        at scbus0 target 12 lun 0 (pass12,da12)
<DELL PERC H730P Mini 4.30>        at scbus0 target 13 lun 0 (pass13,da13)
<DELL PERC H730P Mini 4.30>        at scbus0 target 14 lun 0 (pass14,da14)
<DELL PERC H730P Mini 4.30>        at scbus0 target 15 lun 0 (pass15,da15)
<DELL PERC H730P Mini 4.30>        at scbus0 target 16 lun 0 (pass16,da16)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus4 target 0 lun 0 (ses0,pass17)
root@ushpst-san02[~]#
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So you have all your virtual disks showing up. What happens if you try a ZFS pool import?
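(A hedged aside: a bare "zpool import" is a safe first probe — it only scans the attached devices and lists importable pools without actually importing anything. "tank" below is a hypothetical pool name.)

Code:
# Scan all devices for importable pools; makes no changes:
zpool import
# If the pool is listed, a read-only import is the gentler first attempt:
zpool import -o readonly=on tank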
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And what are you seeing for "gpart show da3" (all of the various daX devices, except perhaps your boot drives, should look very similar)?
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
Code:
root@ushpst-san02[~]# gpart show da1
gpart: No such geom: da1.
root@ushpst-san02[~]# gpart show da2
gpart: No such geom: da2.
root@ushpst-san02[~]# gpart show da3
gpart: No such geom: da3.
root@ushpst-san02[~]# gpart show da4
gpart: No such geom: da4.
root@ushpst-san02[~]# gpart show da5
gpart: No such geom: da5.
root@ushpst-san02[~]# gpart show da6
gpart: No such geom: da6.
root@ushpst-san02[~]#
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Mm, okay, that's concerning. Has anything been done to the RAID controller that might have changed its configuration?

What we'd hope to see is something along the lines of

Code:
# gpart show da6
=>         40  27344764848  da6  GPT  (13T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  27340570456    2  freebsd-zfs  (13T)


which shows a partition starting at block #4194432, the beginning of the ZFS partition. If we had a partition, we could then inspect it (e.g. with "zdb -l /dev/da6" or something like that). But we don't.

We could speculatively write a partition table out there and see what happens. If you didn't tinker with the default swap settings, then the offset for the ZFS partition is *probably* what is shown above. I think I once ran across a tool to identify ZFS partitions based on magic values, but I don't really remember.

The other thing would be to try Klennet ZFS Recovery on it and see if it is able to find anything out there. If it can't find anything, we're unlikely to be able to recover anything.
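
(A sketch of the "speculatively write a partition table" idea above, assuming the default TrueNAS layout from the gpart output shown, 512-byte sectors, and an untouched 2 GiB swap; da6 is just an example device. This writes to the disk — clone it first, and treat every offset as an assumption to verify against a known-good disk.)

Code:
# Recreate a GPT matching the default TrueNAS layout, then look for ZFS labels.
gpart create -s gpt da6
gpart add -b 128 -s 4194304 -t freebsd-swap -i 1 da6   # 2 GiB swap partition
gpart add -b 4194432 -t freebsd-zfs -i 2 da6           # remainder as ZFS
zdb -l /dev/da6p2   # do intact ZFS labels show up on the new partition?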
 

Tong_Po

Dabbler
Joined
Jul 19, 2023
Messages
28
jgreco said:
[...] The other thing would be to try Klennet ZFS Recovery on it and see if it is able to find anything out there. If it can't find anything, we're unlikely to be able to recover anything.
What is the best approach for Klennet? Install Windows on an unused drive and boot from that?
 
