can't mount "unlocked by ancestor" dataset

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Hi,

It seems that I won the lottery this weekend: my primary NAS failed due to a power outage and two disks in a RAID-Z2 died. Luckily I have a backup system which contains snapshots from the old system. Now I wanted to mount a particular dataset, but it shows as "unlocked by ancestor".

It does mount, but the folder (/mnt/Backup/data) is empty. This must be because the dataset itself was still encrypted by the main system (I have that key and can unlock other folders with it). When I click on the dataset itself, I get the message [EFAULT] Failed retreiving USER quotas for Backup/data.

Then I started to create a replication job so I could replicate the data back to my freshly installed main NAS, but it will not let me do that either. Regardless of whether or not the target has an encrypted dataset, I get an error.

So I am kind of stuck for ideas at the moment, and I fear that my backup is becoming useless. Wondering if anybody has some fresh ideas on how to at least get access to the dataset on my backup system. Worst case, I can transfer the data over manually.
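For the manual-transfer fallback: an encrypted dataset can usually be replicated without ever unlocking it, by using a raw send. A rough sketch only — the snapshot name is one from later in this thread, and the target host/dataset names are placeholders:

```shell
# -w/--raw sends the encrypted blocks as-is (wrapped key included), so
# the destination can later be unlocked with the original key.
# 'main-nas' and 'tank/valhalla' are placeholder names, not from this thread.
sudo zfs send -w Backup/valhalla@auto-2023-11-26_00-00 | \
  ssh admin@main-nas sudo zfs receive -s tank/valhalla
```

The `-s` on the receiving side makes an interrupted transfer resumable, which is worth having for a multi-terabyte dataset.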
 
Joined
Oct 22, 2019
Messages
3,641
Unless you provide system info and your ZFS pool layout, we can only shoot in the dark.

Are you familiar with using the command-line and more notably zfs/zpool commands?
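If you are, a typical starting set for describing the layout would be something like:

```shell
# Pool topology and device health:
sudo zpool status -v
# All datasets, with the encryption-related properties that matter here:
sudo zfs list -o name,used,avail,mountpoint,encryptionroot,keystatus
```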
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Fair enough. My backup system is a RAID-Z1 and has no services running except receiving replications. So here is some of my ZFS data:

admin@fremen[~]$ sudo zfs list
[sudo] password for admin:
NAME USED AVAIL REFER MOUNTPOINT
Backup 3.66T 3.48T 307K /mnt/Backup
Backup/.system 4.77M 3.48T 447K legacy
Backup/.system/configs-ae32c386e13840b2bf9c0083275e7941 281K 3.48T 281K legacy
Backup/.system/cores 281K 1024M 281K legacy
Backup/.system/ctdb_shared_vol 281K 3.48T 281K legacy
Backup/.system/glusterd 300K 3.48T 300K legacy
Backup/.system/netdata-ae32c386e13840b2bf9c0083275e7941 1.77M 3.48T 1.77M legacy
Backup/.system/rrd-ae32c386e13840b2bf9c0083275e7941 281K 3.48T 281K legacy
Backup/.system/samba4 639K 3.48T 377K legacy
Backup/.system/services 281K 3.48T 281K legacy
Backup/.system/webui 281K 3.48T 281K legacy
Backup/barbora 5.69G 3.48T 5.68G /mnt/Backup/barbora
Backup/synthing 40.7G 3.48T 40.7G /mnt/Backup/synthing
Backup/valhalla 2.96T 3.48T 2.80T /mnt/Backup/valhalla
Backup/vmbkp 663G 3.48T 264G /mnt/Backup/vmbkp
boot-pool 4.84G 87.2G 96K none
boot-pool/.system 157M 87.2G 112K legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941 1.82M 87.2G 1.82M legacy
boot-pool/.system/cores 96K 1024M 96K legacy
boot-pool/.system/ctdb_shared_vol 96K 87.2G 96K legacy
boot-pool/.system/glusterd 96K 87.2G 96K legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941 154M 87.2G 154M legacy
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941 96K 87.2G 96K legacy
boot-pool/.system/samba4 188K 87.2G 188K legacy
boot-pool/.system/services 96K 87.2G 96K legacy
boot-pool/.system/webui 96K 87.2G 96K legacy
boot-pool/ROOT 4.67G 87.2G 96K none
boot-pool/ROOT/23.10.0.1 2.34G 87.2G 2.34G legacy
boot-pool/ROOT/23.10.0.1-1 2.33G 87.2G 2.33G legacy
boot-pool/ROOT/Initial-Install 8K 87.2G 2.33G /
boot-pool/grub 8.22M 87.2G 8.22M legacy

zfs get mounted:

Backup/valhalla mounted no -
Backup/valhalla@auto-2023-10-30_00-00 mounted - -
Backup/valhalla@auto-2023-10-31_00-00 mounted - -
Backup/valhalla@auto-2023-11-01_00-00 mounted - -
Backup/valhalla@auto-2023-11-02_00-00 mounted - -
Backup/valhalla@auto-2023-11-03_00-00 mounted - -
Backup/valhalla@auto-2023-11-04_00-00 mounted - -
Backup/valhalla@auto-2023-11-05_00-00 mounted - -
Backup/valhalla@auto-2023-11-06_00-00 mounted - -
Backup/valhalla@auto-2023-11-07_00-00 mounted - -
Backup/valhalla@auto-2023-11-08_00-00 mounted - -
Backup/valhalla@auto-2023-11-09_00-00 mounted - -
Backup/valhalla@auto-2023-11-10_00-00 mounted - -
Backup/valhalla@auto-2023-11-11_00-00 mounted - -
Backup/valhalla@auto-2023-11-12_00-00 mounted - -
Backup/valhalla@auto-2023-11-13_00-00 mounted - -
Backup/valhalla@auto-2023-11-14_00-00 mounted - -
Backup/valhalla@auto-2023-11-15_00-00 mounted - -
Backup/valhalla@auto-2023-11-16_00-00 mounted - -
Backup/valhalla@auto-2023-11-17_00-00 mounted - -
Backup/valhalla@auto-2023-11-18_00-00 mounted - -
Backup/valhalla@auto-2023-11-19_00-00 mounted - -
Backup/valhalla@auto-2023-11-20_00-00 mounted - -
Backup/valhalla@auto-2023-11-21_00-00 mounted - -
Backup/valhalla@auto-2023-11-22_00-00 mounted - -
Backup/valhalla@auto-2023-11-23_00-00 mounted - -
Backup/valhalla@auto-2023-11-24_00-00 mounted - -
Backup/valhalla@auto-2023-11-25_00-00 mounted - -
Backup/valhalla@auto-2023-11-26_00-00 mounted - -

when trying to mount manually:

admin@fremen[~]$ sudo zfs mount Backup/valhalla
cannot mount 'Backup/valhalla': Permission denied
admin@fremen[~]$

zfs get keystatus

Backup/valhalla keystatus available -
Backup/valhalla@auto-2023-10-30_00-00 keystatus available -
Backup/valhalla@auto-2023-10-31_00-00 keystatus available -
Backup/valhalla@auto-2023-11-01_00-00 keystatus available -
Backup/valhalla@auto-2023-11-02_00-00 keystatus available -
Backup/valhalla@auto-2023-11-03_00-00 keystatus available -
Backup/valhalla@auto-2023-11-04_00-00 keystatus available -
Backup/valhalla@auto-2023-11-05_00-00 keystatus available -
Backup/valhalla@auto-2023-11-06_00-00 keystatus available -
Backup/valhalla@auto-2023-11-07_00-00 keystatus available -
Backup/valhalla@auto-2023-11-08_00-00 keystatus available -
Backup/valhalla@auto-2023-11-09_00-00 keystatus available -
Backup/valhalla@auto-2023-11-10_00-00 keystatus available -
Backup/valhalla@auto-2023-11-11_00-00 keystatus available -
Backup/valhalla@auto-2023-11-12_00-00 keystatus available -
Backup/valhalla@auto-2023-11-13_00-00 keystatus available -
Backup/valhalla@auto-2023-11-14_00-00 keystatus available -
Backup/valhalla@auto-2023-11-15_00-00 keystatus available -
Backup/valhalla@auto-2023-11-16_00-00 keystatus available -
Backup/valhalla@auto-2023-11-17_00-00 keystatus available -
Backup/valhalla@auto-2023-11-18_00-00 keystatus available -
Backup/valhalla@auto-2023-11-19_00-00 keystatus available -
Backup/valhalla@auto-2023-11-20_00-00 keystatus available -
Backup/valhalla@auto-2023-11-21_00-00 keystatus available -
Backup/valhalla@auto-2023-11-22_00-00 keystatus available -
Backup/valhalla@auto-2023-11-23_00-00 keystatus available -
Backup/valhalla@auto-2023-11-24_00-00 keystatus available -
Backup/valhalla@auto-2023-11-25_00-00 keystatus available -
Backup/valhalla@auto-2023-11-26_00-00 keystatus available -

Let me know what else you need and I can spit it out.
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Oh, and when I am in the folder, running ls:

admin@fremen[/mnt/Backup/valhalla]$ ls -ls
total 0

I did the same as root, since the folder is owned by the root user.

admin@fremen[/mnt/Backup]$ ls -l
total 27
drwxr-xr-x 2 root root 2 Nov 6 14:35 valhalla
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Some additional thoughts: the backups have been replicated from my main system, and some of the datasets have different (non-root) permissions on them. I suspect that is the reason some of them (like
Backup/barbora 5.69G 3.48T 5.68G /mnt/Backup/barbora
Backup/synthing 40.7G 3.48T 40.7G /mnt/Backup/synthing
)

don't mount properly: the user simply does not exist on the backup system. The question is whether I can somehow figure out which UIDs (because I think just the username will not work) are on these datasets, in order to properly mount them.
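On the UID question: ownership is stored numerically, so `ls -n` shows the raw UIDs even when no matching user exists on the listing system. A small illustration on a throwaway directory (on the backup box you would point it at e.g. /mnt/Backup/barbora instead):

```shell
# Create a scratch directory and list it with numeric owner/group IDs.
# `ls -ln` prints UIDs/GIDs as numbers, so orphaned UIDs replicated from
# another system remain visible as-is.
d=$(mktemp -d)
touch "$d/example"
ls -ln "$d"
rm -r "$d"
# On a mounted dataset, ZFS can also summarise space usage per UID:
#   sudo zfs userspace Backup/barbora
```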
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
some screenshots of the pool layout

[screenshot: pool layout]

[screenshot: pool layout]
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Sorry if I am polluting the thread; I'm just trying to get out, bit by bit, all the information that may help solve this.

zfs get all Backup/valhalla
NAME PROPERTY VALUE SOURCE
Backup/valhalla type filesystem -
Backup/valhalla creation Mon Nov 6 10:54 2023 -
Backup/valhalla used 2.96T -
Backup/valhalla available 3.48T -
Backup/valhalla referenced 2.80T -
Backup/valhalla compressratio 1.08x -
Backup/valhalla mounted no -
Backup/valhalla quota none received
Backup/valhalla reservation none received
Backup/valhalla recordsize 128K default
Backup/valhalla mountpoint /mnt/Backup/valhalla default
Backup/valhalla sharenfs off inherited from Backup
Backup/valhalla checksum on default
Backup/valhalla compression lz4 inherited from Backup
Backup/valhalla atime off inherited from Backup
Backup/valhalla devices on default
Backup/valhalla exec on default
Backup/valhalla setuid on default
Backup/valhalla readonly on local
Backup/valhalla zoned off default
Backup/valhalla snapdir hidden default
Backup/valhalla aclmode restricted received
Backup/valhalla aclinherit discard inherited from Backup
Backup/valhalla createtxg 627 -
Backup/valhalla canmount on default
Backup/valhalla xattr sa received
Backup/valhalla copies 1 received
Backup/valhalla version 5 -
Backup/valhalla utf8only off -
Backup/valhalla normalization none -
Backup/valhalla casesensitivity insensitive -
Backup/valhalla vscan off default
Backup/valhalla nbmand off default
Backup/valhalla sharesmb off inherited from Backup
Backup/valhalla refquota none received
Backup/valhalla refreservation none received
Backup/valhalla guid 7556151558105083409 -
Backup/valhalla primarycache all default
Backup/valhalla secondarycache all default
Backup/valhalla usedbysnapshots 163G -
Backup/valhalla usedbydataset 2.80T -
Backup/valhalla usedbychildren 0B -
Backup/valhalla usedbyrefreservation 0B -
Backup/valhalla logbias latency default
Backup/valhalla objsetid 479 -
Backup/valhalla dedup off default
Backup/valhalla mlslabel none default
Backup/valhalla sync standard default
Backup/valhalla dnodesize legacy default
Backup/valhalla refcompressratio 1.06x -
Backup/valhalla written 0 -
Backup/valhalla logicalused 3.19T -
Backup/valhalla logicalreferenced 2.98T -
Backup/valhalla volmode default default
Backup/valhalla filesystem_limit none default
Backup/valhalla snapshot_limit none default
Backup/valhalla filesystem_count none default
Backup/valhalla snapshot_count none default
Backup/valhalla snapdev hidden default
Backup/valhalla acltype posix inherited from Backup
Backup/valhalla context none default
Backup/valhalla fscontext none default
Backup/valhalla defcontext none default
Backup/valhalla rootcontext none default
Backup/valhalla relatime on default
Backup/valhalla redundant_metadata all default
Backup/valhalla overlay on default
Backup/valhalla encryption aes-256-gcm -
Backup/valhalla keylocation none default
Backup/valhalla keyformat hex -
Backup/valhalla pbkdf2iters 0 default
Backup/valhalla encryptionroot Backup -
Backup/valhalla keystatus available -
Backup/valhalla special_small_blocks 0 default
Backup/valhalla snapshots_changed Sun Nov 26 1:25:26 2023 -
 
Joined
Oct 22, 2019
Messages
3,641
Which users exist on which system doesn't affect how ZFS mounts datasets.

Can you use [code][/code] tags to enclose your output?


What does the following reveal:
Code:
zfs get used,mountpoint,encryptionroot Backup/valhalla


And also:
Code:
zfs mount | grep valhalla
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Hi,
here is the output:

Code:
sudo zfs get used,mountpoint,encryptionroot Backup/valhalla
NAME             PROPERTY        VALUE                 SOURCE
Backup/valhalla  used            2.96T                 -
Backup/valhalla  mountpoint      /mnt/Backup/valhalla  default
Backup/valhalla  encryptionroot  Backup                -


The zfs mount | grep valhalla does not spit anything out.
 
Joined
Oct 22, 2019
Messages
3,641
So basically, "Backup" is the encryptionroot of "Backup/valhalla", meaning that you cannot independently lock/unlock valhalla by itself.

I'd wager that you have an interfering (empty) folder in the way of the mountpoint's path. Perhaps as a relic from before?


Just to confirm:
Code:
du -hs --apparent-size /mnt/Backup/valhalla


If you feel comfortable, you can remove the empty folder.
Code:
rmdir /mnt/Backup/valhalla


Now try to mount the dataset, either by exporting and re-importing the pool, or using the TrueNAS CLI tool.
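In raw ZFS terms, that cycle looks roughly like the following. The TrueNAS middleware normally drives this itself, so the web UI is the preferred route; this is a sketch of what happens underneath:

```shell
# Export and re-import the pool, then reload keys and retry the mounts.
# `zfs load-key -r` walks the tree and loads keys for every
# encryptionroot it finds; here that should just be 'Backup'.
sudo zpool export Backup
sudo zpool import Backup
sudo zfs load-key -r Backup
sudo zfs mount -a
```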
 
Last edited:

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Hi,

du -hs --apparent-size /mnt/Backup/valhalla
2 /mnt/Backup/valhalla

And you may be right that it is some relic from before, though it is nothing I did myself. Hence I found it a bit weird that the folder was already created.

To make it even weirder, I cannot remove the folder:

rmdir /mnt/Backup/valhalla
rmdir: failed to remove '/mnt/Backup/valhalla': Operation not permitted

This was done with both the admin account and, to be sure, also with the root account itself.

I was also thinking of changing the mount point itself to a different folder, if possible.

admin@fremen[/mnt/Backup]$ ls -l
total 27
drwxr-xr-x 2 root root 2 Nov 6 14:35 valhalla

So pretty funky if you ask me :)
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
So it's an empty folder, and it's not a mount, yet not even the root user can remove this... empty folder?

Code:
stat /mnt/Backup/valhalla


EDIT: You're not cd'd into the directory when you try to rmdir, are you?
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
that is correct:

Code:
root@fremen[~]# stat /mnt/Backup/valhalla
  File: /mnt/Backup/valhalla
  Size: 2               Blocks: 27         IO Block: 512    directory
Device: 0,41    Inode: 2           Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-11-06 14:35:33.147998152 +0100
Modify: 2023-11-06 14:35:33.147998152 +0100
Change: 2023-11-27 14:35:11.157374438 +0100
 Birth: 2023-11-06 14:35:33.147998152 +0100
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
So it's an empty folder, and it's not a mount, yet not even the root user can remove this... empty folder?

Code:
stat /mnt/Backup/valhalla


EDIT: You're not cd'd into the directory when you try to rmdir, are you?
no, I know that will not work :)

Code:
root@fremen[~]# rmdir /mnt/Backup/valhalla
rmdir: failed to remove '/mnt/Backup/valhalla': Operation not permitted
root@fremen[~]# ls
samba  tdb
root@fremen[~]#


Just to show you that I was in a different folder.
 
Joined
Oct 22, 2019
Messages
3,641
Are you okay with exporting your pool, checking for ghost folders, and then re-importing the pool?
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
I am, but the problem is, it won't let me:

[EFAULT] cannot unmount '/mnt/Backup': pool or dataset is busy

Code:
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 464, in __run_body
    rv = await self.method(*([self] + args))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/export.py", line 172, in export
    await self.middleware.call('zfs.pool.export', pool['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1349, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1355, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] cannot unmount '/mnt/Backup': pool or dataset is busy
 


EDIT: I did check that nothing else was mounted at all (there are no services running on it like SMB/NFS/iSCSI).

Code:
root@fremen[~]# zfs mount
boot-pool/ROOT/23.10.0.1-1      /
boot-pool/grub                  /boot/grub
Backup                          /mnt/Backup
boot-pool/.system               /var/db/system
boot-pool/.system/cores         /var/db/system/cores
boot-pool/.system/samba4        /var/db/system/samba4
boot-pool/.system/rrd-ae32c386e13840b2bf9c0083275e7941  /var/db/system/rrd-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941  /var/db/system/configs-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/webui         /var/db/system/webui
boot-pool/.system/services      /var/db/system/services
boot-pool/.system/glusterd      /var/db/system/glusterd
boot-pool/.system/ctdb_shared_vol  /var/db/system/ctdb_shared_vol
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941  /var/db/system/netdata-ae32c386e13840b2bf9c0083275e7941
boot-pool/.system/cores         /var/lib/systemd/coredump
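When an unmount fails with "pool or dataset is busy" even though nothing obvious is mounted, an open file handle under the mountpoint is the usual suspect. Something like this may reveal the holder, assuming `fuser`/`lsof` are installed on the system:

```shell
# List processes holding files open on the filesystem mounted at /mnt/Backup:
sudo fuser -vm /mnt/Backup
# Or, with lsof, recursively check everything under the directory:
sudo lsof +D /mnt/Backup
```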
 
Joined
Oct 22, 2019
Messages
3,641
At this point, can you even reboot the system?
 

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
Rebooted, so I was able to export now. When I imported the pool again, the following message popped up:

Code:
Error: [EFAULT] /var/db/system/glusterd does not exist or is not mounted

This system is not in a cluster, so I'm not sure why this even showed up.

Then when I unlock the pool I got this:

[screenshot: unlock dialog]


Makes sense, since the other 3 are encrypted by the key from my main system (but that is also where my confusion comes from, since the valhalla dataset is from that same system and the same settings were used to snapshot it to the backup system).

But when I continue I get this:

[screenshot: error dialog]


Code:
Failed to mount dataset: [EFAULT] Failed to mount dataset: cannot mount '/mnt/Backup/.system': failed to create mountpoint: Operation not permitted


EDIT: I have edited the JSON for the key to include the syncthing, barbora and vmbkp datasets, so they all unlock correctly at the same time, but fail miserably when mounting.
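For anyone following along: the key file TrueNAS exports and imports here is, to my understanding, plain JSON mapping dataset names to hex-encoded keys, so adding entries by hand looks roughly like this (the hex strings below are dummies, not usable keys):

```shell
# Hypothetical example of the key-file layout; values are placeholders.
cat <<'EOF' > /tmp/backup-keys.json
{
  "Backup": "aabbccdd00112233aabbccdd00112233aabbccdd00112233aabbccdd00112233",
  "Backup/valhalla": "44556677889900aa44556677889900aa44556677889900aa44556677889900aa"
}
EOF
```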

[screenshot]



[screenshot]
 
Last edited:

valhalla

Explorer
Joined
Nov 27, 2023
Messages
51
I tried to get my head around it, but I fear the pool itself may have a problem, which is causing all this weirdness. It would be good to really understand what could cause this, so as to avoid it happening again in the future.
 
Joined
Oct 22, 2019
Messages
3,641
Where is your System Dataset? It should under no circumstance be nested inside a "lockable" parent.
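On SCALE, the System Dataset location can be checked from the shell with `midclt`, the middleware client that ships with TrueNAS (shown here as a sketch; the GUI equivalent is System Settings → Advanced → Storage):

```shell
# Show the system dataset configuration, including which pool hosts it.
sudo midclt call systemdataset.config | python3 -m json.tool
```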
 