No such pool available on boot. IDs seem different in the TrueNAS DB

flmmartins

Dabbler
Joined
Sep 19, 2022
Messages
31
Hello everyone,

I have 2 pools named backup and default where default is my main one.

I upgraded from CORE to SCALE. Everything was fine at first, but after a couple of weeks I noticed that after a reboot or an upgrade my default pool is not imported. I have to go to the UI and import the pool manually. The backup pool imports normally.

Both my pools are encrypted.

The bad thing about having to re-import is that I then need to set up my k3s applications again and perform several small manual actions.

I looked at the syslog but nothing stands out. Any clues as to what's going on?
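
For reference, this is roughly the state I check from a shell after a reboot (a sketch of the commands, not a verbatim transcript; 'default' is my pool name):

# Pools ZFS can see on disk but has not imported; 'default' shows up here after boot
$ zpool import

# Import it by name (with encrypted pools the datasets stay locked until keys are loaded)
$ zpool import default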

Thanks,
 

flmmartins

Dabbler
Joined
Sep 19, 2022
Messages
31
Happy 2024 everyone!

Still haven't resolved this issue. =(

Here is some more info:

The default pool is the one with the problem; it is on sdb and sdc.
The boot pool is on sdd.
sda is the backup pool, which works normally; I rarely use it.


I print an lsblk just before the boot-time pool import, captured with a systemd drop-in along the lines of the sketch below (from memory, not the exact file):
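
# /etc/systemd/system/zfs-import-cache.service.d/lsblk.conf (path is my assumption; sketch)
[Service]
ExecStartPre=/usr/bin/lsblk

With that in place, syslog says: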

Jan 1 16:23:03 tamrieltower systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
Jan 1 16:23:03 tamrieltower lsblk[930]: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
Jan 1 16:23:03 tamrieltower lsblk[930]: sda 8:0 0 931.5G 0 disk
Jan 1 16:23:03 tamrieltower lsblk[930]: ├─sda1 8:1 0 2G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: └─sda2 8:2 0 929.5G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: sdb 8:16 0 1.8T 0 disk
Jan 1 16:23:03 tamrieltower lsblk[930]: ├─sdb1 8:17 0 2G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: └─sdb2 8:18 0 1.8T 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: sdc 8:32 0 1.8T 0 disk
Jan 1 16:23:03 tamrieltower lsblk[930]: ├─sdc1 8:33 0 2G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: └─sdc2 8:34 0 1.8T 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: sdd 8:48 0 232.9G 0 disk
Jan 1 16:23:03 tamrieltower lsblk[930]: ├─sdd1 8:49 0 260M 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: ├─sdd2 8:50 0 216.6G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: └─sdd3 8:51 0 16G 0 part
Jan 1 16:23:03 tamrieltower lsblk[930]: sr0 11:0 1 1024M 0 rom
Jan 1 16:23:03 tamrieltower zpool[931]: no pools available to import
Jan 1 16:23:03 tamrieltower systemd[1]: Finished zfs-import-cache.service - Import ZFS pools by cache file.
Jan 1 16:23:03 tamrieltower systemd[1]: Reached target zfs-import.target - ZFS pool import target.
Jan 1 16:23:03 tamrieltower systemd[1]: Starting zfs-mount.service - Mount ZFS filesystems...
Jan 1 16:23:03 tamrieltower systemd[1]: Starting zfs-volume-wait.service - Wait for ZFS Volume (zvol) links in /dev...
Jan 1 16:23:03 tamrieltower zfs[932]: failed to lock /etc/exports.d/zfs.exports.lock: Operation not permitted
Jan 1 16:23:03 tamrieltower zvol_wait[933]: No zvols found, nothing to do.
Jan 1 16:23:03 tamrieltower systemd[1]: Finished zfs-volume-wait.service - Wait for ZFS Volume (zvol) links in /dev.
Jan 1 16:23:03 tamrieltower systemd[1]: Reached target zfs-volumes.target - ZFS volumes are ready.
Jan 1 16:23:03 tamrieltower systemd[1]: Finished zfs-mount.service - Mount ZFS filesystems.
Jan 1 16:23:10 tamrieltower systemd[1]: Finished ix-wait-on-disks.service - Wait on Disk Enumeration.
Jan 1 16:23:10 tamrieltower systemd[1]: Starting middlewared.service - TrueNAS Middleware...
Jan 1 16:23:18 tamrieltower systemd[1]: Started middlewared.service - TrueNAS Middleware.
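
As far as I understand it, zfs-import-cache.service only imports pools that are recorded in the ZFS cache file, so I also wanted to see what the cache file actually contains. A sketch of that check (the /etc/zfs/zpool.cache path is the ZFS default; the middleware log below mentions its own copy at /data/zfs/zpool.cache):

# Dump the pool configurations (name, GUID, vdevs) recorded in a cache file
$ zdb -C -U /etc/zfs/zpool.cache
$ zdb -C -U /data/zfs/zpool.cache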

Everything seems fine there, so I went and checked the middlewared logs, where I found:

[2024/01/01 16:23:19] (DEBUG) PoolService.import_on_boot():344 - Creating '/data/zfs' (if it doesnt already exist)
[2024/01/01 16:23:19] (DEBUG) PoolService.import_on_boot():351 - Creating '/data/zfs/zpool.cache' (if it doesnt already exist)
[2024/01/01 16:23:19] (DEBUG) PoolService.import_on_boot_impl():223 - Importing 'default' with guid: '11161637274800723257'
[2024/01/01 16:23:19] (ERROR) PoolService.import_on_boot_impl():226 - Failed to import 'default' with guid: '11161637274800723257' with error: "cannot import '11161637274800723257': no such pool available\n"

Here is the output of a few status commands:

sqlite> select * from storage_disk;


{serial_lunid}S1DBNSAF817384Z_50025388a05f7415|sdd|scsi|2096|S1DBNSAF817384Z|250059350016||Auto|Always On|Disabled|1||||||||Samsung_SSD_840_EVO_250GB||SSD|||ATA|50025388a05f7415
{serial_lunid}Z52C595Q_5000c500e545abbd|sdc|scsi|2080|Z52C595Q|2000398934016||Auto|Always On|Disabled|1||||||||ST2000VN004-2E4164|5900|HDD||13812072171370630843|ATA|5000c500e545abbd
{serial_lunid}Z52C311Q_5000c500e544e732|sda|scsi|2048|Z52C311Q|2000398934016||Auto|Always On|Disabled|1||||||||ST2000VN004-2E4164|5900|HDD||10188667900967897476|ATA|5000c500e544e732
{serial_lunid}WD-WCC4JACP47KJ_50014ee20ab68248|sdb|scsi|2064|WD-WCC4JACP47KJ|1000204886016||Auto|Always On|Disabled|1||||||||WDC_WD10EZRX-00L4HB0|5400|HDD||4518782545706033556|ATA|50014ee20ab68248

sqlite> select * from storage_volume;
1|default|11161637274800723257
3|backup|9932495424708070591
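
Since the pool is not imported at this point, the pool GUID can also be read straight off the disk labels to compare with what the DB stores; a sketch, using one of the data partitions from the lsblk output above:

# Dump the ZFS label of a member partition; compare its 'pool_guid' and 'name'
# fields with the vol_guid stored in storage_volume
$ zdb -l /dev/sdc2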

$ zpool status -v

  pool: backup
 state: ONLINE
  scan: scrub repaired 0B in 01:36:43 with 0 errors on Sun Dec 10 01:36:44 2023
config:

        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          sdb2      ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:19 with 0 errors on Sun Dec 31 03:45:20 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdd2      ONLINE       0     0     0
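
(Side note: I haven't acted on the boot-pool compatibility warning, but if I read the 'action' text right, the fix would presumably be something along these lines; 'grub2' is my assumption for the intended feature set, so check before changing anything:)

# Inspect and, if appropriate, pin the boot pool's feature set (sketch; verify first)
$ zpool get compatibility boot-pool
$ zpool set compatibility=grub2 boot-pool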

After importing the pool in the UI I always receive this error, but the pool works afterwards:

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: storage_volume.vol_name

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 427, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 465, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 186, in import_pool
    pool_id = await self.middleware.call('datastore.insert', 'storage.volume', {
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1399, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/write.py", line 62, in insert
    result = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1399, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1353, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/connection.py", line 106, in execute_write
    result = self.connection.execute(sql, binds)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1365, in execute
    return self._exec_driver_sql(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1669, in _exec_driver_sql
    ret = self._execute_context(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
    util.raise_(
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: storage_volume.vol_name
[SQL: INSERT INTO storage_volume (vol_name, vol_guid) VALUES (?, ?)]
[parameters: ('default', '13641551195106906369')]
(Background on this error at: https://sqlalche.me/e/14/gkpj)

Looking at this, it seems the TrueNAS DB has a different GUID from the one I am importing in the UI. How can I update this properly without breaking things?
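
The naive fix I keep circling around would be to point the stored row at the GUID from the traceback, something like the sketch below, but I have no idea whether hand-editing the DB is safe (it is presumably unsupported), which is why I am asking:

# HYPOTHETICAL and likely unsupported: make the DB row match the GUID the import reported
# (/data/freenas-v1.db is where the config DB lives on my system)
$ sqlite3 /data/freenas-v1.db \
    "UPDATE storage_volume SET vol_guid = '13641551195106906369' WHERE vol_name = 'default';"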
 

Bainnor

Dabbler
Joined
Nov 26, 2023
Messages
17
I'm no expert, but I can make a couple of guesses; that's all they'd be, though, and probably wrong ones at that. You may get more interest from the experts if you include your hardware information as per the Forum Rules in the link at the top of the page. A better understanding of your hardware and how it's connected could provide enough information for educated guesses without a lot of red herrings.

If there's still no solution, you could try a bug report with the appropriate logs using the link above the Forum Rules.
 

flmmartins

Dabbler
Joined
Sep 19, 2022
Messages
31
It took me months to figure out, but here it goes for those who face this: I solved it by exporting the pool. During the export I chose to delete the configuration but not to destroy the data. After the export, I imported the pool again. Presumably deleting the configuration removed the stale DB record with the old GUID, so the re-import could register the pool fresh; everything now works as it should.
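
For anyone repeating this: after the re-import it is worth confirming that ZFS and the middleware DB now agree on the GUID. A sketch of that check (the DB path is from my system):

# GUID of the live pool as ZFS reports it
$ zpool get -H -o value guid default

# GUID now stored by the middleware; the two values should match
$ sqlite3 /data/freenas-v1.db "SELECT vol_name, vol_guid FROM storage_volume WHERE vol_name = 'default';"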
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I could be wrong, but I think I read somewhere on these forums shortly after Cobia came out that even though it is an option, you should not encrypt the entire pool, as it may cause upgrade/version-migration/import issues. Just encrypt the datasets on the pool as needed.
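
If you go that route, encryption is just a dataset property set at creation time; a minimal sketch (pool and dataset names are placeholders):

# Create an encrypted dataset on an otherwise unencrypted pool; prompts for a passphrase
$ zfs create -o encryption=on -o keyformat=passphrase tank/secrets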
 