Pool offline and vdev not assigned after each reboot on Cobia

OneHungryPoboy

Dabbler
Joined
Oct 24, 2014
Messages
16
I upgraded from Bluefin 22.12.3.1 to Cobia 23.10.0.1 and found that my pool is offline, my data vdev is not assigned, and all 4 disks show as unassigned. Selecting Import Pool shows the missing pool, but attempting the import fails with the attached error. Some snippets of the error here:
Error importing pool
(sqlite3.IntegrityError) UNIQUE constraint failed: storage_volume.vol_name

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(

However, after the import failure, if I force-refresh the GUI page it shows the pool as active and I am able to access the data shares. If I reboot the system, I am back to the pool being offline. Everything works normally if I boot back into Bluefin, but if I boot back into Cobia I'm back to an offline pool. Any ideas?
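
For anyone else debugging this, here is roughly how to compare what ZFS itself sees with what the middleware database thinks it knows. This is a sketch, not a supported procedure: the pool.query fields shown are assumptions about the output shape, and the sqlite3 client may not be installed by default.

    # What ZFS itself sees (run from the SCALE shell)
    sudo zpool status
    sudo zpool import          # with no arguments, lists pools available for import

    # What the middleware database thinks it knows
    midclt call pool.query | jq '.[] | {name, guid, status}'

    # The UNIQUE constraint error points at the middleware config database;
    # storage_volume.vol_name is the column named in the traceback
    sudo sqlite3 /data/freenas-v1.db 'SELECT vol_name FROM storage_volume;'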
 

Attachments

  • Cobia pool import error.txt
    4.5 KB · Views: 917

BlackWolf42

Cadet
Joined
Nov 7, 2023
Messages
1
I just wanted to register and chime in here. I had the same issue. It pretty much fixed itself after I installed the update from 23.10.0.0 to SCALE-23.10.0.1, rebooted, and tried to import the pool. The re-import errored again, but after another restart I looked at the datasets and they were all there, just locked. I unlocked them and everything was good. I'm mystified.
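
If anyone wants to check the locked/unlocked state from the shell, something like the following should work; "tank" is a placeholder for your pool name, and the UI's Unlock action is the supported path since it also keeps the middleware in sync.

    # keystatus "unavailable" means the dataset is locked
    zfs get -r -t filesystem keystatus tank

    # ZFS-native unlock (prompts for the passphrase), then mount everything
    sudo zfs load-key -r tank
    sudo zfs mount -a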

MACHINES DON'T FIX THEMSELVES; words I've lived by, whether it's a car or a PC. I was this close to just blowing it away and spending the 5 kilowatt-hours in power and 7 hours to restore it from a backup on a TrueNAS SCALE 22.12.4.2 (Bluefin) box.

I hope you were able to get your issue taken care of.
 

OneHungryPoboy

Dabbler
Joined
Oct 24, 2014
Messages
16
MACHINES DON'T FIX THEMSELVES
Certainly not in my case. I had seen a few other posts where booting into Bluefin and then back to Cobia fixed things, or where importing the pool was a permanent fix, but not for me. Luckily this is an old NAS that I have been using to test SCALE before I switch my main Core system over, so I am able to leave it in a broken state if the developers need additional information. I wrote up a bug report in Jira last night.
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
Was the pool set to be encrypted before the upgrade? If so, try unencrypting it, then rebooting.
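
Something like this should show whether ZFS-native encryption is in play before you try anything ("tank" is a placeholder for the pool name):

    # "encryption" shows the cipher or "off"; "keystatus" shows whether the key is loaded
    zfs get -r encryption,keyformat,keystatus tank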
 

GhostQuark

Cadet
Joined
Dec 8, 2023
Messages
1
Just wanted to register and say thanks so much for posting this solution! Two days ago I was still booting FreeNAS 11.3 off a 16 GB thumb drive (like you, I've had this pool since 9.something). I bought a new SSD to modernize, and all the Core upgrades went smoothly. The last step, migrating to SCALE, landed me in the exact same boat with the pool import error. After exporting (and de-selecting "Delete saved configurations..."), the subsequent import worked without error. Grateful.
 

accrocchio96

Dabbler
Joined
Jan 9, 2022
Messages
13
A dev looked at the "midclt call pool.query | jq" output, and the issue was that my pool did not have a GUID. I have had this pool since a FreeNAS Core 9.x beta (9.10 beta, I think). The fix was to export the pool from the GUI and then import it back in.
Thanks so much! You found the solution to our issue! Please mark the thread as solved so others can find it quickly :)
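
For anyone landing here later, this is roughly the check described above; the jq filter is just one way to trim the output, and a null or missing guid is the symptom to look for.

    # List each pool the middleware knows about, with its GUID
    midclt call pool.query | jq '.[] | {name, guid}'

    # If guid is null for your pool: Storage > Export/Disconnect
    # (leave "Delete saved configurations" unchecked), then Import Pool.
    # After re-import the pool should have a GUID and survive reboots.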
 