can't mount "unlocked by ancestor" dataset

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Here is a thought. Could it be that the boot-pool somehow got mixed into the dataset? That would be very weird in my opinion, since this snapshot setup on my main system ran for years without a problem.

This is the df output from the backup system:

Code:
boot-pool/grub                                               91435264    8448   91426816   1% /boot/grub
boot-pool/.system                                            91426944     128   91426816   1% /var/db/system
boot-pool/.system/cores                                       1048576     128    1048448   1% /var/db/system/cores
boot-pool/.system/samba4                                     91427072     256   91426816   1% /var/db/system/samba4
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941       91426944     128   91426816   1% /var/db/system/rrd-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941   91428864    2048   91426816   1% /var/db/system/configs-ae32c386e13840b2bf9c0083275e7941


However, when I check the datasets, I find this:

Code:
admin@fremen[~]$ sudo zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
Backup                                                      3.66T  3.48T   307K  /mnt/Backup
Backup/.system                                              2.86M  3.48T   396K  /mnt/Backup/.system
Backup/.system/configs-ae32c386e13840b2bf9c0083275e7941      281K  3.48T   281K  /mnt/Backup/.system/configs-ae32c386e13840b2bf9c0083275e7941
Backup/.system/cores                                         281K  1024M   281K  /mnt/Backup/.system/cores
Backup/.system/ctdb_shared_vol                               281K  3.48T   281K  /mnt/Backup/.system/ctdb_shared_vol
Backup/.system/glusterd                                      281K  3.48T   281K  /mnt/Backup/.system/glusterd
Backup/.system/netdata-ae32c386e13840b2bf9c0083275e7941      281K  3.48T   281K  /mnt/Backup/.system/netdata-ae32c386e13840b2bf9c0083275e7941
Backup/.system/rrd-ae32c386e13840b2bf9c0083275e7941          281K  3.48T   281K  /mnt/Backup/.system/rrd-ae32c386e13840b2bf9c0083275e7941
Backup/.system/samba4                                        281K  3.48T   281K  /mnt/Backup/.system/samba4
Backup/.system/services                                      281K  3.48T   281K  /mnt/Backup/.system/services
Backup/.system/webui                                         281K  3.48T   281K  /mnt/Backup/.system/webui
Backup/barbora                                              5.69G  3.48T  5.68G  /mnt/Backup/barbora
Backup/synthing                                             40.7G  3.48T  40.7G  /mnt/Backup/synthing
Backup/valhalla                                             2.96T  3.48T  2.80T  /mnt/Backup/valhalla
Backup/vmbkp                                                 663G  3.48T   264G  /mnt/Backup/vmbkp
boot-pool                                                   4.86G  87.2G    96K  none
boot-pool/.system                                            166M  87.2G   112K  legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  1.99M  87.2G  1.99M  legacy
boot-pool/.system/cores                                       96K  1024M    96K  legacy
boot-pool/.system/ctdb_shared_vol                             96K  87.2G    96K  legacy
boot-pool/.system/glusterd                                    96K  87.2G    96K  legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941   163M  87.2G   163M  legacy
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941        96K  87.2G    96K  legacy
boot-pool/.system/samba4                                     188K  87.2G   188K  legacy
boot-pool/.system/services                                    96K  87.2G    96K  legacy
boot-pool/.system/webui                                       96K  87.2G    96K  legacy
boot-pool/ROOT                                              4.67G  87.2G    96K  none
boot-pool/ROOT/23.10.0.1                                    2.34G  87.2G  2.34G  legacy
boot-pool/ROOT/23.10.0.1-1                                  2.33G  87.2G  2.33G  legacy
boot-pool/ROOT/Initial-Install                                 8K  87.2G  2.33G  /
boot-pool/grub                                              8.22M  87.2G  8.22M  legacy


EDIT: I may be wrong, but why would that even be in the Backup pool? Shouldn't the mountpoint just be "legacy"?

EDIT: like this:

Code:
root@padishah[~]# zfs list
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
Vault                                                    917G  19.7T   410K  /mnt/Vault
Vault/.system                                           74.5M  19.7T   546K  legacy
Vault/.system/configs-ae32c386e13840b2bf9c0083275e7941   725K  19.7T   725K  legacy
Vault/.system/cores                                      341K  1024M   341K  legacy
Vault/.system/ctdb_shared_vol                            341K  19.7T   341K  legacy
Vault/.system/glusterd                                   401K  19.7T   401K  legacy
Vault/.system/netdata-ae32c386e13840b2bf9c0083275e7941  70.1M  19.7T  70.1M  legacy
Vault/.system/rrd-ae32c386e13840b2bf9c0083275e7941       341K  19.7T   341K  legacy
Vault/.system/samba4                                    1.08M  19.7T  1.08M  legacy
Vault/.system/services                                   341K  19.7T   341K  legacy
Vault/.system/webui                                      341K  19.7T   341K  legacy
Vault/data                                               428G  19.7T   478K  /mnt/Vault/data
Vault/data/habkp                                         341K  19.7T   341K  /mnt/Vault/data/habkp
Vault/data/photography                                   220G  19.7T   220G  /mnt/Vault/data/photography
Vault/data/syncthing                                    96.3G  19.7T  96.3G  /mnt/Vault/data/syncthing
Vault/data/vmbkp                                         112G  19.7T   112G  /mnt/Vault/data/vmbkp
Vault/multimedia                                         488G  19.7T   488G  /mnt/Vault/multimedia
 
Joined
Oct 22, 2019
Messages
3,641
Something happened during this "dance" when you switched to your Backup pool.

I would set the System Dataset to reside on the boot-pool from the TrueNAS GUI menu.
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Something happened during this "dance" when you switched to your Backup pool.

I would set the System Dataset to reside on the boot-pool from the TrueNAS GUI menu.
Sorry if this sounds really stupid, but where would I do that? Under System/Boot?
 
Joined
Oct 22, 2019
Messages
3,641
I think in SCALE it's somewhere under Advanced or System. Somewhere...

Man, the Docs need an overhaul...
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
I hear you. I only recently moved to SCALE; I was on CORE for many years without a single hiccup. Let me check and get back to you.
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
May have found it, but guess what:

[attached screenshot: the System Dataset setting in the GUI]


Looks correct to me.
 
Joined
Oct 22, 2019
Messages
3,641
Looks correct to me.
How the heck did you get .system datasets/children on your Backup pool?


What are your active mounts?
Code:
zfs mount

Code:
mount


Check with both commands.
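On a busy system the `mount` output gets long; a filter like the following narrows it down to one pool. This is only a sketch: the sample lines below stand in for real `mount` output, and on a live system you would pipe `mount` itself into the same awk expression.

```shell
# Sample lines standing in for real `mount` output on the live system.
sample='boot-pool/.system on /var/db/system type zfs (rw,relatime)
Backup on /mnt/Backup type zfs (rw,noatime)
Backup/vmbkp on /mnt/Backup/vmbkp type zfs (rw,noatime)'

# Keep only entries whose source is the Backup pool or one of its datasets.
# Live equivalent: mount | awk '$1 ~ /^Backup(\/|$)/ {print $1, "->", $3}'
printf '%s\n' "$sample" | awk '$1 ~ /^Backup(\/|$)/ {print $1, "->", $3}'
```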
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Code:
admin@fremen[~]$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16244412k,nr_inodes=4061103,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3283064k,mode=755,inode64)
boot-pool/ROOT/23.10.0.1-1 on / type zfs (rw,relatime,xattr,noacl,casesensitive)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=102400k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15502)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,inode64)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
boot-pool/grub on /boot/grub type zfs (rw,relatime,xattr,noacl,casesensitive)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
boot-pool/.system on /var/db/system type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/cores on /var/db/system/cores type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/samba4 on /var/db/system/samba4 type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941 on /var/db/system/rrd-ae32c386e13840b2bf9c0083275e7941 type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941 on /var/db/system/configs-ae32c386e13840b2bf9c0083275e7941 type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/webui on /var/db/system/webui type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/services on /var/db/system/services type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/glusterd on /var/db/system/glusterd type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/ctdb_shared_vol on /var/db/system/ctdb_shared_vol type zfs (rw,relatime,xattr,noacl,casesensitive)
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941 on /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941 type zfs (rw,relatime,xattr,noacl,casesensitive)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
boot-pool/.system/cores on /var/lib/systemd/coredump type zfs (rw,relatime,xattr,noacl,casesensitive)
Backup on /mnt/Backup type zfs (rw,noatime,xattr,posixacl,casesensitive)


and

Code:
admin@fremen[~]$ sudo zfs mount
[sudo] password for admin:
boot-pool/ROOT/23.10.0.1-1      /
boot-pool/grub                  /boot/grub
boot-pool/.system               /var/db/system
boot-pool/.system/cores         /var/db/system/cores
boot-pool/.system/samba4        /var/db/system/samba4
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941  /var/db/system/rrd-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  /var/db/system/configs-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/webui         /var/db/system/webui
boot-pool/.system/services      /var/db/system/services
boot-pool/.system/glusterd      /var/db/system/glusterd
boot-pool/.system/ctdb_shared_vol  /var/db/system/ctdb_shared_vol
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941  /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/cores         /var/lib/systemd/coredump
Backup                          /mnt/Backup
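The gap stands out if you compare what `zfs list` says should be mounted against what `zfs mount` reports as actually mounted. A rough sketch of that comparison, with sample strings standing in for the real command output (the assumed live inputs would be `zfs list -H -o name,mountpoint -r Backup` and `zfs mount`):

```shell
# Datasets zfs expects to mount (stand-in for: zfs list -H -o name,mountpoint -r Backup)
expected='Backup /mnt/Backup
Backup/valhalla /mnt/Backup/valhalla
Backup/vmbkp /mnt/Backup/vmbkp'

# Datasets actually mounted (stand-in for: zfs mount)
mounted='Backup /mnt/Backup'

# Print every expected dataset that is missing from the active mounts.
printf '%s\n' "$expected" | grep -Fxv "$mounted"
```

Here the two lines printed are the datasets that failed to mount.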
 
Joined
Oct 22, 2019
Messages
3,641
Create a checkpoint for Backup pool.
Code:
zpool checkpoint Backup


Do a dry run of destroying the mysterious Backup/.system dataset.
Code:
zfs destroy -nvR Backup/.system


Then recursively destroy the mysterious Backup/.system dataset.
Code:
zfs destroy -vR Backup/.system


Then see if you can get life back to normal. (Export the pool and re-import it? Try to unlock your datasets?)


If all works out, you can delete the checkpoint.
Code:
zpool checkpoint -d Backup



EDIT: Please double and triple check any destructive commands. Please don't put "spaces" where there shouldn't be any.
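For reference, here is the whole sequence above as one reviewable script. The zfs/zpool calls are stubbed out with `echo` so it only prints what it would do; on the live system you would drop the `echo` prefixes, and run step 3 only after the dry run in step 2 looks right.

```shell
pool='Backup'
victim='Backup/.system'

echo zpool checkpoint "$pool"       # 1. safety net: pool-wide checkpoint
echo zfs destroy -nvR "$victim"     # 2. dry run (-n): print what WOULD be destroyed
echo zfs destroy -vR "$victim"      # 3. the real recursive destroy
echo zpool checkpoint -d "$pool"    # 4. discard the checkpoint once all is well
```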
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Ok, that seems to have gone well:

Code:
admin@fremen[~]$ sudo zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
Backup                                                      3.66T  3.48T   307K  /mnt/Backup
Backup/barbora                                              5.69G  3.48T  5.68G  /mnt/Backup/barbora
Backup/synthing                                             40.7G  3.48T  40.7G  /mnt/Backup/synthing
Backup/valhalla                                             2.96T  3.48T  2.80T  /mnt/Backup/valhalla
Backup/vmbkp                                                 663G  3.48T   264G  /mnt/Backup/vmbkp
boot-pool                                                   4.86G  87.2G    96K  none
boot-pool/.system                                            166M  87.2G   112K  legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  1.99M  87.2G  1.99M  legacy
boot-pool/.system/cores                                       96K  1024M    96K  legacy
boot-pool/.system/ctdb_shared_vol                             96K  87.2G    96K  legacy
boot-pool/.system/glusterd                                    96K  87.2G    96K  legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941   164M  87.2G   164M  legacy
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941        96K  87.2G    96K  legacy
boot-pool/.system/samba4                                     188K  87.2G   188K  legacy
boot-pool/.system/services                                    96K  87.2G    96K  legacy
boot-pool/.system/webui                                       96K  87.2G    96K  legacy
boot-pool/ROOT                                              4.67G  87.2G    96K  none
boot-pool/ROOT/23.10.0.1                                    2.34G  87.2G  2.34G  legacy
boot-pool/ROOT/23.10.0.1-1                                  2.33G  87.2G  2.33G  legacy
boot-pool/ROOT/Initial-Install                                 8K  87.2G  2.33G  /
boot-pool/grub                                              8.22M  87.2G  8.22M  legacy


Question is, should I export and re-import the pool again?
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Done. The .system datasets are still gone, but the import still fails to mount:

Code:
Failed to mount dataset: [EFAULT] Failed to mount dataset: cannot mount 'Backup/valhalla': Permission denied


Code:
admin@fremen[~]$ sudo zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
Backup                                                      3.66T  3.48T   307K  /mnt/Backup
Backup/barbora                                              5.69G  3.48T  5.68G  /mnt/Backup/barbora
Backup/synthing                                             40.7G  3.48T  40.7G  /mnt/Backup/synthing
Backup/valhalla                                             2.96T  3.48T  2.80T  /mnt/Backup/valhalla
Backup/vmbkp                                                 663G  3.48T   264G  /mnt/Backup/vmbkp
boot-pool                                                   4.86G  87.2G    96K  none
boot-pool/.system                                            167M  87.2G   112K  legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  1.99M  87.2G  1.99M  legacy
boot-pool/.system/cores                                       96K  1024M    96K  legacy
boot-pool/.system/ctdb_shared_vol                             96K  87.2G    96K  legacy
boot-pool/.system/glusterd                                    96K  87.2G    96K  legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941   164M  87.2G   164M  legacy
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941        96K  87.2G    96K  legacy
boot-pool/.system/samba4                                     188K  87.2G   188K  legacy
boot-pool/.system/services                                    96K  87.2G    96K  legacy
boot-pool/.system/webui                                       96K  87.2G    96K  legacy
boot-pool/ROOT                                              4.67G  87.2G    96K  none
boot-pool/ROOT/23.10.0.1                                    2.34G  87.2G  2.34G  legacy
boot-pool/ROOT/23.10.0.1-1                                  2.33G  87.2G  2.33G  legacy
boot-pool/ROOT/Initial-Install                                 8K  87.2G  2.33G  /
boot-pool/grub                                              8.22M  87.2G  8.22M  legacy


EDIT: I am still very suspicious about why the valhalla dataset inherits from the Backup pool. The replication tasks are identical, just with a different dataset, so I would expect the same thing to have happened with the other one too, but for some reason it didn't. Same for the .system dataset: I have not touched any of that because there is no reason to, and I only noticed it when we exported the pool yesterday and it complained about glusterd.

So here is my theory: the valhalla dataset was encrypted and sent from my main system to the backup system. For whatever reason it inherited the encryption of the Backup pool, but technically it is still encrypted by the old system (at least I think so). When it then tries to mount, it can't because of that.
 
Last edited:

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Had a quick look at zfs history. What I notice is that the valhalla dataset, for whatever reason, did not use the key it should have been encrypted with from the main system.

Also note that, for whatever reason, the .system datasets also get inserted into the pool:

Code:
History for 'Backup':
2023-11-06.10:10:48 py-libzfs: zfs set keylocation=prompt Backup
2023-11-06.10:10:48 py-libzfs: zfs inherit  Backup
2023-11-06.10:10:54 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system
2023-11-06.10:10:58 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o quota=1G -o xattr=sa Backup/.system/cores
2023-11-06.10:10:58 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/samba4
2023-11-06.10:10:58 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/rrd-ae32c386e13840b2bf9c0083275e7941
2023-11-06.10:10:59 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/configs-ae32c386e13840b2bf9c0083275e7941
2023-11-06.10:10:59 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/webui
2023-11-06.10:10:59 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/services
2023-11-06.10:11:01 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/glusterd
2023-11-06.10:11:01 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/ctdb_shared_vol
2023-11-06.10:11:01 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa Backup/.system/netdata-ae32c386e13840b2bf9c0083275e7941
2023-11-06.10:14:41 py-libzfs: zfs load-key  -L file:///tmp/tmpd5icecox -n Backup
2023-11-06.10:48:05 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:05 zfs set readonly=on Backup/barbora
2023-11-06.10:48:07 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:08 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:10 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:11 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:13 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:14 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:15 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:17 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:18 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:20 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:21 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:23 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:24 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:26 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/barbora
2023-11-06.10:48:26 zfs set readonly=on Backup/barbora
2023-11-06.14:11:10 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:10 zfs set readonly=on Backup/valhalla
2023-11-06.14:11:11 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:13 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:15 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:18 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:20 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:24 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:26 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:27 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:29 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:30 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:32 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:34 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:36 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:37 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:39 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:41 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:42 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:44 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:45 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:47 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:49 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:50 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:52 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:54 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:55 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:57 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:59 zfs recv -s -F -x sharenfs -x sharesmb -x mountpoint Backup/valhalla
2023-11-06.14:11:59 zfs set readonly=on Backup/valhalla
2023-11-06.14:26:44 zfs set readonly=on Backup/synthing
2023-11-06.14:26:44 zfs set readonly=on Backup/synthing
2023-11-06.14:35:24 zpool import 10140409433243874837 -R /mnt -m -f -o cachefile=/data/zfs/zpool.cache
2023-11-06.14:35:31 py-libzfs: zfs load-key  -L file:///tmp/tmp0a0g0csa  Backup
2023-11-06.14:35:48 py-libzfs: zfs load-key  -L file:///tmp/tmpof_goxma -n Backup
2023-11-06.14:37:42 zfs recv -A Backup/vmbkp
2023-11-06.14:42:04 zfs set readonly=on Backup/vmbkp
2023-11-06.15:30:33 zfs set readonly=on Backup/vmbkp


EDIT: verified on the main (freshly installed) system that it is normal behavior for .system to get inserted into the pool.
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
To make things even more bizarre: the vmbkp dataset is the only one I can currently access; it unlocks and mounts properly.

Code:
admin@fremen[/mnt/Backup/vmbkp]$ sudo du
14      ./dump/vzdump-lxc-105-2023_03_01-11_59_15.tmp/usr/bin
27      ./dump/vzdump-lxc-105-2023_03_01-11_59_15.tmp/usr
41      ./dump/vzdump-lxc-105-2023_03_01-11_59_15.tmp
181213062       ./dump
14      ./xo-vm-backups/.queue/clean-vm
27      ./xo-vm-backups/.queue
2217287 ./xo-vm-backups/7320d6a0-c85a-dffc-1972-947ee5596e1f/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/54f072ec-7083-480b-a3a2-f7378ad1c88f
2217301 ./xo-vm-backups/7320d6a0-c85a-dffc-1972-947ee5596e1f/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
2217314 ./xo-vm-backups/7320d6a0-c85a-dffc-1972-947ee5596e1f/vdis
2217382 ./xo-vm-backups/7320d6a0-c85a-dffc-1972-947ee5596e1f
8978990 ./xo-vm-backups/3c1d9a26-2613-c64d-5d23-d84fcd2aedee/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/e19a2876-d450-49bb-b131-0cf3db462872
35      ./xo-vm-backups/3c1d9a26-2613-c64d-5d23-d84fcd2aedee/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/34df95ea-231c-4367-990f-5e2289e3574a
8979038 ./xo-vm-backups/3c1d9a26-2613-c64d-5d23-d84fcd2aedee/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
8979052 ./xo-vm-backups/3c1d9a26-2613-c64d-5d23-d84fcd2aedee/vdis
8979125 ./xo-vm-backups/3c1d9a26-2613-c64d-5d23-d84fcd2aedee
2602233 ./xo-vm-backups/0be734ff-c1d7-da5f-15b1-c74e6caa32b7/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/baeeab96-93f7-4ac5-9270-ee0b56d248ab
2602246 ./xo-vm-backups/0be734ff-c1d7-da5f-15b1-c74e6caa32b7/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
2602260 ./xo-vm-backups/0be734ff-c1d7-da5f-15b1-c74e6caa32b7/vdis
4962515 ./xo-vm-backups/0be734ff-c1d7-da5f-15b1-c74e6caa32b7
3384279 ./xo-vm-backups/3a0deaad-3fed-e4c8-8691-f9b05ffec80e/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/390a25b5-b796-4c6b-8a2f-741a163a9cc6
3384293 ./xo-vm-backups/3a0deaad-3fed-e4c8-8691-f9b05ffec80e/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
3384306 ./xo-vm-backups/3a0deaad-3fed-e4c8-8691-f9b05ffec80e/vdis
3384374 ./xo-vm-backups/3a0deaad-3fed-e4c8-8691-f9b05ffec80e
6338307 ./xo-vm-backups/04973fcb-a521-9451-9c22-aa05b5eaabc3/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/7b4623d4-c426-4752-852d-f17a40ae3ce9
6338321 ./xo-vm-backups/04973fcb-a521-9451-9c22-aa05b5eaabc3/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
6338334 ./xo-vm-backups/04973fcb-a521-9451-9c22-aa05b5eaabc3/vdis
10435906        ./xo-vm-backups/04973fcb-a521-9451-9c22-aa05b5eaabc3
2966850 ./xo-vm-backups/960cee63-fb37-d229-2196-618f655194e2/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/7ab77b73-658f-44ac-9d10-efaf485976b8
2966863 ./xo-vm-backups/960cee63-fb37-d229-2196-618f655194e2/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
2966877 ./xo-vm-backups/960cee63-fb37-d229-2196-618f655194e2/vdis
2966944 ./xo-vm-backups/960cee63-fb37-d229-2196-618f655194e2
14757655        ./xo-vm-backups/6b5a6d8a-5f24-4b4c-4339-20523818fc32/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/73f40055-d83a-42b5-928a-d4515d2aa79c
14757669        ./xo-vm-backups/6b5a6d8a-5f24-4b4c-4339-20523818fc32/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
14757682        ./xo-vm-backups/6b5a6d8a-5f24-4b4c-4339-20523818fc32/vdis
14757750        ./xo-vm-backups/6b5a6d8a-5f24-4b4c-4339-20523818fc32
3133505 ./xo-vm-backups/95a3fcdf-1707-c336-5a1f-3e497d02a02b/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/71ed4279-b64d-4afb-bcfb-8d7212f77bc5
3133518 ./xo-vm-backups/95a3fcdf-1707-c336-5a1f-3e497d02a02b/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
3133532 ./xo-vm-backups/95a3fcdf-1707-c336-5a1f-3e497d02a02b/vdis
3133599 ./xo-vm-backups/95a3fcdf-1707-c336-5a1f-3e497d02a02b
2585605 ./xo-vm-backups/87041701-3f07-365c-d7cf-96a8c3b01262/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/2f35da9c-1896-49ef-a4f8-bedd0184106f
79      ./xo-vm-backups/87041701-3f07-365c-d7cf-96a8c3b01262/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/eab392ab-1c18-429d-b6ed-617e460da7b8
2585697 ./xo-vm-backups/87041701-3f07-365c-d7cf-96a8c3b01262/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
2585711 ./xo-vm-backups/87041701-3f07-365c-d7cf-96a8c3b01262/vdis
2585778 ./xo-vm-backups/87041701-3f07-365c-d7cf-96a8c3b01262
27824713        ./xo-vm-backups/32212c68-3155-7430-ec1b-90bf0133afc1/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/b292050c-6729-4de6-b126-9f8de9345f6c
27824727        ./xo-vm-backups/32212c68-3155-7430-ec1b-90bf0133afc1/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
27824740        ./xo-vm-backups/32212c68-3155-7430-ec1b-90bf0133afc1/vdis
27824808        ./xo-vm-backups/32212c68-3155-7430-ec1b-90bf0133afc1
14699258        ./xo-vm-backups/193d5b9b-b886-3871-2490-dc89c1623c00/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df/d9d05f07-d0f8-41b3-8276-fd7daee33cc3
14699272        ./xo-vm-backups/193d5b9b-b886-3871-2490-dc89c1623c00/vdis/01f5ddc5-786b-41d5-82d2-fd50d83787df
14699285        ./xo-vm-backups/193d5b9b-b886-3871-2490-dc89c1623c00/vdis
14699353        ./xo-vm-backups/193d5b9b-b886-3871-2490-dc89c1623c00
95947572        ./xo-vm-backups
14      ./images
14      ./snippets
277160674       .
admin@fremen[/mnt/Backup/vmbkp]
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
OK, another thing. I tried to create a new dataset: the dataset gets created, but as soon as it tries to mount, it fails with "permission denied". Something is really goofy with the pool itself.
 
Joined
Oct 22, 2019
Messages
3,641
First, to confirm:
Code:
zfs get canmount,mountpoint,encryptionroot,keylocation,keystatus,keyformat Backup
zfs get canmount,mountpoint,encryptionroot,keylocation,keystatus,keyformat Backup/valhalla


You can try a "dry" unlock on the command line:
Code:
zfs load-key -n Backup



EDIT: You are using the GUI for everything besides troubleshooting, yes? Don't start unlocking and mounting datasets from the command line.
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Correct. The only things I have done CLI-wise were collecting information and running the commands you provided me to test with. The testing with the new dataset was also done via the GUI. So all unlocking and mounting is done via the GUI.

Code:
root@fremen[/mnt]# zfs get canmount,mountpoint,encryptionroot,keylocation,keystatus,keyformat Backup
NAME    PROPERTY        VALUE        SOURCE
Backup  canmount        on           default
Backup  mountpoint      /mnt/Backup  default
Backup  encryptionroot  Backup       -
Backup  keylocation     prompt       local
Backup  keystatus       available    -
Backup  keyformat       hex          -
root@fremen[/mnt]# zfs get canmount,mountpoint,encryptionroot,keylocation,keystatus,keyformat Backup/valhalla
NAME             PROPERTY        VALUE                 SOURCE
Backup/valhalla  canmount        on                    default
Backup/valhalla  mountpoint      /mnt/Backup/valhalla  default
Backup/valhalla  encryptionroot  Backup                -
Backup/valhalla  keylocation     none                  default
Backup/valhalla  keystatus       available             -
Backup/valhalla  keyformat       hex                   -


Code:
root@fremen[/mnt]# zfs load-key -n Backup
Enter hex key for 'Backup':
1 / 1 key(s) successfully verified
root@fremen[/mnt]#
 
Joined
Oct 22, 2019
Messages
3,641
For whatever reason the pool inherited the encryption of the Backup pool. But technically it still is encrypted by the old system (at least I think).
It's not. According to your output, it is indeed managed by the "Backup" root dataset as its encryptionroot, meaning that any lock/unlock commands are issued to the encryptionroot and automatically apply to "valhalla".
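That relationship is easy to see in script form, too. The sketch below groups each dataset under its encryption root, using sample strings standing in for `zfs get -H -o name,value encryptionroot -r Backup` output (which is tab-separated on a real system; plain spaces here). Everything hangs off "Backup", so that is where load-key/unload-key apply.

```shell
# Stand-in for: zfs get -H -o name,value encryptionroot -r Backup
sample='Backup Backup
Backup/valhalla Backup
Backup/vmbkp Backup'

# Collect every dataset ($1) under its encryptionroot ($2), then print the groups.
printf '%s\n' "$sample" | awk '{kids[$2] = kids[$2] " " $1}
                               END {for (r in kids) print r ":" kids[r]}'
```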

At this point, while "Backup" is unlocked, I'd be interested to see whether you can even mount Backup/valhalla from the command line, and whether you get a more informative error message.
Code:
zfs mount Backup/valhalla
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
And this is what drives me nuts, because it simply makes no sense to me.

Code:
root@fremen[/mnt]# zfs mount Backup/valhalla
cannot mount 'Backup/valhalla': Permission denied
 
Joined
Oct 22, 2019
Messages
3,641
Try again with the "-v" flag. "Permission denied" is vague.
 