Datasets disappeared after changing the pool's mount point

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
I am trying to migrate from OpenMediaVault (where I was already using a ZFS pool for storage) to TrueNAS Core, and I am facing the issue that I can no longer access several terabytes of data after modifying the pool's mount point in TrueNAS. By "cannot access" I mean that they are no longer visible. This is a rather complex issue, with a number of factors that may play a role, so I'll try to provide some context. (I am new to BSD/TrueNAS, so please bear with me.)

Once I had installed TrueNAS on a new SSD, I went ahead and imported my existing pool (mypool) via the TrueNAS UI. It seemed to go smoothly even though I did not export the pool on OMV before shutting it down. All datasets were accessible as expected.

But when I tried to recreate my SMB shares, it would complain that "the path must reside within a volume mount point":

[screenshot: SMB share error "the path must reside within a volume mount point"]


I learned that the reason for this was a problem which has previously been described in this thread, namely that the name of the mount point did not include the name of the pool. As you can infer from the screenshot above, the mount point was /mnt/zfs/ even though the pool is called mypool. (On OMV, the pool was mounted at /zfs.)

In order to fix this, I did the following:

Code:
# zfs unmount mypool
# zfs get mountpoint mypool
NAME    PROPERTY    VALUE     SOURCE
mypool  mountpoint  /mnt/zfs  local
# zfs set mountpoint=/mnt/mypool mypool
# zfs get mountpoint mypool
NAME    PROPERTY    VALUE            SOURCE
mypool  mountpoint  /mnt/mnt/mypool  local
# zfs set mountpoint=mypool mypool
cannot set property for 'mypool': 'mountpoint' must be an absolute path, 'none', or 'legacy'
# zfs set mountpoint=/mypool mypool
# zfs get mountpoint mypool
NAME    PROPERTY    VALUE        SOURCE
mypool  mountpoint  /mnt/mypool  local


I then exported and re-imported the pool via the TrueNAS UI.
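(In hindsight, I assume the extra /mnt prefix came from TrueNAS importing the pool with an altroot of /mnt, which gets prepended to whatever mountpoint is set. If I understand correctly, this can be checked with:)

Code:
# zpool get altroot mypool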

I was now able to create SMB shares, but a number of datasets no longer showed up. The data still seems to be there, as it is taking up several TB of space, but I can't see it.

Based on this post, my hunch is that the problem may be related to an issue I faced on OMV a couple of years ago, where data somehow got written into /zfs/NAS while the pool was not mounted (or something like that, I don't remember the details). At the time, I somehow solved the issue by creating /zfs/NASx as a temporary directory so that I could move the data (which had been written into /zfs/NAS and would become invisible once the pool was mounted "on top of it") into the pool. Sorry, I don't remember more, but if this seems relevant, feel free to ask specific questions, and I may manage to remember.

Hoping that I might regain access to the missing datasets, I tried changing the mountpoint back to what it was, but that doesn't seem to be possible:

Code:
# zfs set mountpoint=/zfs mypool
cannot set property for 'mypool': child dataset with inherited mountpoint is used in a non-global zone


Is there any way of fixing this?
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
One more thing that might be relevant:

After changing the mount point, I created an SMB share and was then asked whether I wanted to set ACL permissions, and I said yes. I then tried to apply an ACL preset, but it got stuck on "please wait".

[screenshot: ACL editor stuck on "please wait"]


Unsure whether any template had been applied or which permissions it included, I tried to access the ACL again via the Samba tab. This time I'm not going near the presets; I just want to give access to myself. But it gives me this:

[screenshot: ACL error message]


In the meantime, I have learned that this is a known incompatibility which can be fixed (at the price of losing all existing ACLs), but that is not the issue in this thread. I'm just mentioning it in case it is somehow relevant.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It sounds like there may be two unrelated issues here:
  1. Some datasets are not mounted properly.
     • Have you checked that the mountpoints all make sense in the child datasets? With TrueNAS, you would typically want them all cleared, i.e. inheriting from their parent.
  2. Confusion between directories and datasets.
     • Once the first point has been addressed, and if you find that some datasets are empty/emptier than they should be, you'll need to systematically unmount the datasets, rename the directories, mount the datasets, and copy the missing data over (rough sketch below).
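For one dataset, that would look roughly like the following. This is only a sketch; mypool/NAS and the .olddir suffix are illustrative, so adjust names and double-check every path before running anything, and only delete the renamed directory once you have verified the copy.

Code:
zfs unmount mypool/NAS                             # expose the plain directory hidden underneath (dataset name is an example)
mv /mnt/mypool/NAS /mnt/mypool/NAS.olddir          # move the directory out of the way (suffix is illustrative)
zfs mount mypool/NAS                               # remount the dataset (recreates the mountpoint)
rsync -a /mnt/mypool/NAS.olddir/ /mnt/mypool/NAS/  # copy the previously hidden data into the dataset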
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
After I unmounted and then remounted the pool, at least some of the missing datasets (or folders?) showed up again. This is good news, of course, but I have no idea what's going on. If zfs unmount mypool and then zfs mount mypool can make stuff appear, I suspect the same can make stuff disappear again, which is worrying, so I would still like to figure out what is causing this and fix it. In the end, changing the mount point may not have been the cause of the disappearance...
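In the meantime, I suppose I can at least keep an eye on what is actually mounted. If I understand the man pages correctly, these should show it:

Code:
# zfs mount
# zfs get -r -o name,value mounted mypool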

Have you checked that the mountpoints all make sense in the child datasets?

How do I know whether they "make sense"?

Here they are:


Code:
zfs list
NAME                                                                                 USED  AVAIL     REFER  MOUNTPOINT
boot-pool                                                                           2.29G  90.2G       24K  none
boot-pool/.system                                                                    858M  90.2G      852M  legacy
boot-pool/.system/configs-bd26c8fd36fd4618a75dffa52c04f828                           129K  90.2G      129K  legacy
boot-pool/.system/cores                                                               24K  1024M       24K  legacy
boot-pool/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828                              5.92M  90.2G     5.92M  legacy
boot-pool/.system/samba4                                                              75K  90.2G       75K  legacy
boot-pool/.system/services                                                            24K  90.2G       24K  legacy
boot-pool/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828                            184K  90.2G      184K  legacy
boot-pool/.system/webui                                                               24K  90.2G       24K  legacy
boot-pool/ROOT                                                                      1.45G  90.2G       24K  none
boot-pool/ROOT/Initial-Install                                                         1K  90.2G     1.29G  legacy
boot-pool/ROOT/default                                                              1.45G  90.2G     1.44G  legacy
mypool                                                                              7.06T  63.4G     4.84T  /mnt/mypool
mypool/NAS                                                                          51.1G  63.4G     51.1G  /mnt/mypool/NAS
mypool/TM                                                                           1.79T  63.4G     1.79T  /mnt/mypool/TM
mypool/TM/christoph                                                                  128K  63.4G      128K  /mnt/mypool/TM/christoph
mypool/containers                                                                    882M  63.4G       96K  /mnt/mypool/containers
mypool/containers/1a26d2434d4b926f1e8bf2437d89dbb4c22082cee1bdce7420f0c2230388a164    92K  63.4G      266M  legacy
mypool/containers/2446710cbd06512d9598269ef0eb4001e70f1ce70af47e3330e2c925d782288d   116K  63.4G     84.5M  legacy
mypool/containers/362154b593a4c60ba49068742c95e0bbbb6d255b501a600facfb24ef4565dcd0    72K  63.4G      266M  legacy
mypool/containers/37ebdb880f27bbdd960fde707136dff1b97db3c0346aefd2b237eb0f4308b22a    88K  63.4G      609M  legacy
mypool/containers/38324eccf5ac61acbe17dd1febff80b177fdd6ca872ed40198582730d4f28086  46.0M  63.4G      189M  legacy
mypool/containers/493a9cdda080ed04a9c341df8373eb57509865978d48e5a41a8e9685204aab24   526M  63.4G      609M  legacy
mypool/containers/5d98959c93a02604fc131d74c1a13412a6ff9bb1999dc6d7b05cf32704b31a56  52.1M  63.4G     98.3M  legacy
mypool/containers/77f4360ae2b52ade4b1272633153fc8e71f1c7238c5edcc7f4dc14d52bef1bf1  31.0M  63.4G      266M  legacy
mypool/containers/7e9df073c9d07bcbddcb6cca3580fdaf822142874b2cada63afe00d6012c5272  84.4M  63.4G     84.4M  legacy
mypool/containers/7fdcb7bcd07a8749b81616e1921c3a557e93bc286cbc88618f238fcff02895e4    72K  63.4G      368K  legacy
mypool/containers/83ce49406cc147ddd561c5a093e71a082122cacf4e4ab3699a1695c76c6c37a6  46.1M  63.4G      235M  legacy
mypool/containers/8c004456aeb58b75f792fa091b194c20d6ed4f0d95dd25b0150d71c5c9ab601b   360K  63.4G      360K  legacy
mypool/containers/aef268d688e5820d156ea20e119ad5995ad5477efbf7b887a50d4cc147ddd934    76K  63.4G      189M  legacy
mypool/containers/c0e5c047951147a952903acd94d5044a912a8390be3f3f26d3d066b275ae7234  3.98M  63.4G     3.98M  legacy
mypool/containers/d2614e6e67befb4b8f863dedf0366ceeb25fe37da5d055a82305afa416a56811  46.0M  63.4G     46.3M  legacy
mypool/containers/d7b5ece069de2921ccc12b602d0b4c26a178d4a7874d5daca68f3875ef8c6e9d   120K  63.4G      609M  legacy
mypool/containers/e43690f1d38ffcd71e73b2daaa391de95137928f2fab990d3dc930bdd4149e20  44.9M  63.4G      143M  legacy
mypool/containerstorage                                                               96K  63.4G       96K  /mnt/var/db/containers
mypool/hikvision                                                                     212G  63.4G      212G  /mnt/mypool/hikvision
mypool/hikvision2                                                                    183G  63.4G      183G  /mnt/mypool/hikvision2
mypool/iocage                                                                       2.11G  63.4G     7.93M  /mnt/mypool/iocage
mypool/iocage/download                                                               256M  63.4G       96K  /mnt/mypool/iocage/download
mypool/iocage/download/13.2-RELEASE                                                  256M  63.4G      256M  /mnt/mypool/iocage/download/13.2-RELEASE
mypool/iocage/images                                                                  96K  63.4G       96K  /mnt/mypool/iocage/images
mypool/iocage/jails                                                                 1.20G  63.4G       96K  /mnt/mypool/iocage/jails
mypool/iocage/jails/openHAB3                                                         813M  63.4G      100K  /mnt/mypool/iocage/jails/openHAB3
mypool/iocage/jails/openHAB3/root                                                    813M  63.4G     1.44G  /mnt/mypool/iocage/jails/openHAB3/root
mypool/iocage/jails/podman_pkg                                                       417M  63.4G      108K  /mnt/mypool/iocage/jails/podman_pkg
mypool/iocage/jails/podman_pkg/data                                                   96K  63.4G       96K  /mnt/mypool/iocage/jails/podman_pkg/data
mypool/iocage/jails/podman_pkg/root                                                  417M  63.4G     1.06G  /mnt/mypool/iocage/jails/podman_pkg/root
mypool/iocage/log                                                                    108K  63.4G      108K  /mnt/mypool/iocage/log
mypool/iocage/releases                                                               670M  63.4G       96K  /mnt/mypool/iocage/releases
mypool/iocage/releases/13.2-RELEASE                                                  670M  63.4G       96K  /mnt/mypool/iocage/releases/13.2-RELEASE
mypool/iocage/releases/13.2-RELEASE/root                                             670M  63.4G      668M  /mnt/mypool/iocage/releases/13.2-RELEASE/root
mypool/iocage/templates                                                               96K  63.4G       96K  /mnt/mypool/iocage/templates


There is a lot that doesn't make sense to me, for example that the above list doesn't quite match what I see in the UI:

[screenshot: dataset list in the TrueNAS UI]


Note, for example, that /mnt/mypool/timemachine is not on the zfs list. Neither is /mnt/mypool/NASx.

Another thing I just noticed is that now that stuff has reappeared, my jails seem to have disappeared. This is what I get when I enter the Jails tab in the UI:

[screenshot: Jails tab asking to set a mount point for iocage]


So, it is asking me to set a mountpoint on mypool/iocage, but as we can see in the zfs list above, that dataset has a mountpoint...
With TrueNAS, you would typically want them all cleared, i.e. inheriting from their parent.
How do I know whether a mountpoint is "cleared" (and how do I clear it)?
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
While I try to figure this out, could someone clarify whether any data is at risk if I add data to the pool or move data around? I'm assuming that the data I'm not seeing is safe (just invisible, somehow), but I want to double-check. The thing is that in the olden days, when files on a hard drive were missing, they could often still be recovered if they were physically still there, i.e. not overwritten by other data. I'm not sure if the analogy applies to those missing files in my pool.

Here is some more information about my situation:

Code:
zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool  95.5G  2.29G  93.2G        -         -     0%     2%  1.00x    ONLINE  -
mypool     7.25T  7.05T   203G        -         -    55%    97%  1.00x    ONLINE  /mnt


Note that the pool is about 8TB in size, with 7.05T used.

But df sees the pool as being only 4.9T:

Code:
df -h
Filesystem                                                                            Size    Used   Avail Capacity  Mounted on
boot-pool/ROOT/default                                                                 92G    1.4G     90G     2%    /
devfs                                                                                 1.0K    1.0K      0B   100%    /dev
tmpfs                                                                                  32M     10M     22M    32%    /etc
tmpfs                                                                                 4.0M    8.0K    4.0M     0%    /mnt
tmpfs                                                                                 7.9G    134M    7.8G     2%    /var
fdescfs                                                                               1.0K    1.0K      0B   100%    /dev/fd
boot-pool/.system                                                                      91G    852M     90G     1%    /var/db/system
boot-pool/.system/cores                                                               1.0G     24K    1.0G     0%    /var/db/system/cores
boot-pool/.system/samba4                                                               90G     72K     90G     0%    /var/db/system/samba4
boot-pool/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828                              90G    193K     90G     0%    /var/db/system/syslog-bd26c8fd36fd4618a75dffa52c04f828
boot-pool/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828                                 90G    7.2M     90G     0%    /var/db/system/rrd-bd26c8fd36fd4618a75dffa52c04f828
boot-pool/.system/configs-bd26c8fd36fd4618a75dffa52c04f828                             90G    129K     90G     0%    /var/db/system/configs-bd26c8fd36fd4618a75dffa52c04f828
boot-pool/.system/webui                                                                90G     24K     90G     0%    /var/db/system/webui
boot-pool/.system/services                                                             90G     24K     90G     0%    /var/db/system/services
fdescfs                                                                               1.0K    1.0K      0B   100%    /var/run/samba/fd
fdescfs                                                                               1.0K    1.0K      0B   100%    /dev/fd
mypool                                                                                4.9T    4.8T     40G    99%    /mnt/mypool
mypool/TM/christoph                                                                    40G    128K     40G     0%    /mnt/mypool/TM/christoph
devfs                                                                                 1.0K    1.0K      0B   100%    /var/db/containers/storage/zfs/graph/51b767a497a33ae66b9ef26e6115a010969781e577c265f5e786b1acec3ed6ae/dev
/var/db/containers/storage/volumes/portainer_data/_data                               7.9G    134M    7.8G     2%    /var/db/containers/storage/zfs/graph/51b767a497a33ae66b9ef26e6115a010969781e577c265f5e786b1acec3ed6ae/data
fdescfs                                                                               1.0K    1.0K      0B   100%    /var/db/containers/storage/zfs/graph/51b767a497a33ae66b9ef26e6115a010969781e577c265f5e786b1acec3ed6ae/dev/fd
mypool/containers/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09     40G    610M     40G     1%    /var/db/containers/storage/zfs/graph/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09
devfs                                                                                 1.0K    1.0K      0B   100%    /var/db/containers/storage/zfs/graph/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09/dev
fdescfs                                                                               1.0K    1.0K      0B   100%    /var/db/containers/storage/zfs/graph/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09/dev/fd
/var/db/containers/storage/volumes/omada-data/_data                                   7.9G    134M    7.8G     2%    /var/db/containers/storage/zfs/graph/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09/opt/tplink/EAPController/data
/var/db/containers/storage/volumes/omada-logs/_data                                   7.9G    134M    7.8G     2%    /var/db/containers/storage/zfs/graph/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09/opt/tplink/EAPController/logs
mypool/VM                                                                              42G    1.8G     40G     4%    /mnt/mypool/VM


Here is what zfs list -r says:

Code:
zfs list -r
NAME                                                                                 USED  AVAIL     REFER  MOUNTPOINT
boot-pool                                                                           2.29G  90.2G       24K  none
boot-pool/.system                                                                    860M  90.2G      852M  legacy
boot-pool/.system/configs-bd26c8fd36fd4618a75dffa52c04f828                           129K  90.2G      129K  legacy
boot-pool/.system/cores                                                               24K  1024M       24K  legacy
boot-pool/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828                              7.22M  90.2G     7.22M  legacy
boot-pool/.system/samba4                                                              72K  90.2G       72K  legacy
boot-pool/.system/services                                                            24K  90.2G       24K  legacy
boot-pool/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828                            193K  90.2G      193K  legacy
boot-pool/.system/webui                                                               24K  90.2G       24K  legacy
boot-pool/ROOT                                                                      1.45G  90.2G       24K  none
boot-pool/ROOT/Initial-Install                                                         1K  90.2G     1.29G  legacy
boot-pool/ROOT/default                                                              1.45G  90.2G     1.44G  legacy
mypool                                                                              7.09T  39.8G     4.82T  /mnt/mypool
mypool/NAS                                                                          51.1G  39.8G     51.1G  /mnt/mypool/NAS
mypool/TM                                                                           1.79T  39.8G     1.79T  /mnt/mypool/TM
mypool/TM/christoph                                                                  128K  39.8G      128K  /mnt/mypool/TM/christoph
mypool/VM                                                                           42.5G  39.8G     1.83G  /mnt/mypool/VM
mypool/VM/server-bx12ye                                                             40.6G  74.5G     5.87G  -
mypool/containers                                                                   1.38G  39.8G       96K  /mnt/mypool/containers
mypool/containers/1a26d2434d4b926f1e8bf2437d89dbb4c22082cee1bdce7420f0c2230388a164    92K  39.8G      266M  legacy
mypool/containers/2446710cbd06512d9598269ef0eb4001e70f1ce70af47e3330e2c925d782288d   116K  39.8G     84.5M  legacy
mypool/containers/3382903366b24f345ae51a10ad02ba936281b3e5238c5669e59c12ddfb7c72c7   120K  39.8G      609M  legacy
mypool/containers/362154b593a4c60ba49068742c95e0bbbb6d255b501a600facfb24ef4565dcd0    72K  39.8G      266M  legacy
mypool/containers/37ebdb880f27bbdd960fde707136dff1b97db3c0346aefd2b237eb0f4308b22a    88K  39.8G      609M  legacy
mypool/containers/38324eccf5ac61acbe17dd1febff80b177fdd6ca872ed40198582730d4f28086  46.0M  39.8G      189M  legacy
mypool/containers/493a9cdda080ed04a9c341df8373eb57509865978d48e5a41a8e9685204aab24   526M  39.8G      609M  legacy
mypool/containers/51b767a497a33ae66b9ef26e6115a010969781e577c265f5e786b1acec3ed6ae    96K  39.8G      266M  legacy
mypool/containers/5d98959c93a02604fc131d74c1a13412a6ff9bb1999dc6d7b05cf32704b31a56  52.1M  39.8G     98.3M  legacy
mypool/containers/5f13d48a40461a0b50ce972faa6ed53907e9d265440bef10d8c9554d0cccee8d   526M  39.8G      609M  legacy
mypool/containers/77f4360ae2b52ade4b1272633153fc8e71f1c7238c5edcc7f4dc14d52bef1bf1  31.0M  39.8G      266M  legacy
mypool/containers/7e9df073c9d07bcbddcb6cca3580fdaf822142874b2cada63afe00d6012c5272  84.4M  39.8G     84.4M  legacy
mypool/containers/7fdcb7bcd07a8749b81616e1921c3a557e93bc286cbc88618f238fcff02895e4    72K  39.8G      368K  legacy
mypool/containers/83ce49406cc147ddd561c5a093e71a082122cacf4e4ab3699a1695c76c6c37a6  46.1M  39.8G      235M  legacy
mypool/containers/8c004456aeb58b75f792fa091b194c20d6ed4f0d95dd25b0150d71c5c9ab601b   360K  39.8G      360K  legacy
mypool/containers/a714894fd6330a4f287576419988eb2fe80e170004526cbbef1bee7cfd990d09   232K  39.8G      610M  legacy
mypool/containers/ab31ed08bb07ddf5114fd9ace372d21c9441bf2f4ccf5110f39ce1c85d45929a   116K  39.8G     84.5M  legacy
mypool/containers/aef268d688e5820d156ea20e119ad5995ad5477efbf7b887a50d4cc147ddd934    76K  39.8G      189M  legacy
mypool/containers/c0e5c047951147a952903acd94d5044a912a8390be3f3f26d3d066b275ae7234  3.98M  39.8G     3.98M  legacy
mypool/containers/d2614e6e67befb4b8f863dedf0366ceeb25fe37da5d055a82305afa416a56811  46.0M  39.8G     46.3M  legacy
mypool/containers/d7b5ece069de2921ccc12b602d0b4c26a178d4a7874d5daca68f3875ef8c6e9d   120K  39.8G      609M  legacy
mypool/containers/e43690f1d38ffcd71e73b2daaa391de95137928f2fab990d3dc930bdd4149e20  44.9M  39.8G      143M  legacy
mypool/containers/f6cf7e37f2381e0f1d73b4b7d440b9f80ae3f0050883d83455df3929a32713d3    88K  39.8G      609M  legacy
mypool/containerstorage                                                               96K  39.8G       96K  /mnt/var/db/containers
mypool/hikvision                                                                     212G  39.8G      212G  /mnt/mypool/hikvision
mypool/hikvision2                                                                    183G  39.8G      183G  /mnt/mypool/hikvision2
mypool/iocage                                                                       2.11G  39.8G     7.93M  /mnt/mypool/iocage
mypool/iocage/download                                                               256M  39.8G       96K  /mnt/mypool/iocage/download
mypool/iocage/download/13.2-RELEASE                                                  256M  39.8G      256M  /mnt/mypool/iocage/download/13.2-RELEASE
mypool/iocage/images                                                                  96K  39.8G       96K  /mnt/mypool/iocage/images
mypool/iocage/jails                                                                 1.20G  39.8G       96K  /mnt/mypool/iocage/jails
mypool/iocage/jails/openHAB3                                                         813M  39.8G      100K  /mnt/mypool/iocage/jails/openHAB3
mypool/iocage/jails/openHAB3/root                                                    813M  39.8G     1.44G  /mnt/mypool/iocage/jails/openHAB3/root
mypool/iocage/jails/podman_pkg                                                       417M  39.8G      108K  /mnt/mypool/iocage/jails/podman_pkg
mypool/iocage/jails/podman_pkg/data                                                   96K  39.8G       96K  /mnt/mypool/iocage/jails/podman_pkg/data
mypool/iocage/jails/podman_pkg/root                                                  417M  39.8G     1.06G  /mnt/mypool/iocage/jails/podman_pkg/root
mypool/iocage/log                                                                    108K  39.8G      108K  /mnt/mypool/iocage/log
mypool/iocage/releases                                                               670M  39.8G       96K  /mnt/mypool/iocage/releases
mypool/iocage/releases/13.2-RELEASE                                                  670M  39.8G       96K  /mnt/mypool/iocage/releases/13.2-RELEASE
mypool/iocage/releases/13.2-RELEASE/root                                             670M  39.8G      668M  /mnt/mypool/iocage/releases/13.2-RELEASE/root
mypool/iocage/templates                                                               96K  39.8G       96K  /mnt/mypool/iocage/templates


Snapshots are using almost no space:

Code:
zfs list -t snapshot
NAME                                                                                           USED  AVAIL     REFER  MOUNTPOINT
boot-pool/ROOT/default@2023-07-31-23:09:59                                                    12.0M      -     1.29G  -
mypool/containers/1a26d2434d4b926f1e8bf2437d89dbb4c22082cee1bdce7420f0c2230388a164@548731963     0B      -      266M  -
mypool/containers/2446710cbd06512d9598269ef0eb4001e70f1ce70af47e3330e2c925d782288d@255991240     0B      -     84.5M  -
mypool/containers/3382903366b24f345ae51a10ad02ba936281b3e5238c5669e59c12ddfb7c72c7@518647338     0B      -      609M  -
mypool/containers/362154b593a4c60ba49068742c95e0bbbb6d255b501a600facfb24ef4565dcd0@894557786     0B      -      266M  -
mypool/containers/38324eccf5ac61acbe17dd1febff80b177fdd6ca872ed40198582730d4f28086@14245846      0B      -      189M  -
mypool/containers/493a9cdda080ed04a9c341df8373eb57509865978d48e5a41a8e9685204aab24@699278155     0B      -      609M  -
mypool/containers/5d98959c93a02604fc131d74c1a13412a6ff9bb1999dc6d7b05cf32704b31a56@61726206      0B      -     98.3M  -
mypool/containers/5f13d48a40461a0b50ce972faa6ed53907e9d265440bef10d8c9554d0cccee8d@266023603     0B      -      609M  -
mypool/containers/77f4360ae2b52ade4b1272633153fc8e71f1c7238c5edcc7f4dc14d52bef1bf1@596257804     0B      -      266M  -
mypool/containers/7e9df073c9d07bcbddcb6cca3580fdaf822142874b2cada63afe00d6012c5272@884184236     0B      -     84.4M  -
mypool/containers/7e9df073c9d07bcbddcb6cca3580fdaf822142874b2cada63afe00d6012c5272@800820246     0B      -     84.4M  -
mypool/containers/7fdcb7bcd07a8749b81616e1921c3a557e93bc286cbc88618f238fcff02895e4@4729533       0B      -      368K  -
mypool/containers/83ce49406cc147ddd561c5a093e71a082122cacf4e4ab3699a1695c76c6c37a6@394368264     0B      -      235M  -
mypool/containers/8c004456aeb58b75f792fa091b194c20d6ed4f0d95dd25b0150d71c5c9ab601b@296056721     0B      -      360K  -
mypool/containers/ab31ed08bb07ddf5114fd9ace372d21c9441bf2f4ccf5110f39ce1c85d45929a@237276085     0B      -     84.5M  -
mypool/containers/aef268d688e5820d156ea20e119ad5995ad5477efbf7b887a50d4cc147ddd934@950578986     0B      -      189M  -
mypool/containers/d2614e6e67befb4b8f863dedf0366ceeb25fe37da5d055a82305afa416a56811@449722110     0B      -     46.3M  -
mypool/containers/d7b5ece069de2921ccc12b602d0b4c26a178d4a7874d5daca68f3875ef8c6e9d@463008925     0B      -      609M  -
mypool/containers/e43690f1d38ffcd71e73b2daaa391de95137928f2fab990d3dc930bdd4149e20@716737992     0B      -      143M  -
mypool/containers/f6cf7e37f2381e0f1d73b4b7d440b9f80ae3f0050883d83455df3929a32713d3@189114261     0B      -      609M  -
mypool/iocage/releases/13.2-RELEASE/root@podman_pkg                                            112K      -      668M  -
mypool/iocage/releases/13.2-RELEASE/root@openHAB3                                              104K      -      668M  -
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
zfs list will list all datasets. Anything not in there is a normal directory.
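If you want to check one specific path, df is a quick test (the path here is just the example from your screenshot):

Code:
df /mnt/mypool/timemachine

If the Filesystem column shows mypool rather than mypool/timemachine, that path is just a directory inside the parent dataset.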
How do I know whether a mountpoint is "cleared" (and how do I clear it)?
Well, they all look fine in your pasted output above, so you can probably move past this part.
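To answer the other half of that question: look at the SOURCE column. Roughly something like this (the dataset name in the second command is only an example):

Code:
zfs get -r -o name,value,source mountpoint mypool
zfs inherit mountpoint mypool/somedataset    # example dataset name

A source of "local" means the mountpoint was set by hand; "default" or "inherited from ..." means it follows the parent. zfs inherit clears a manually-set mountpoint so the dataset inherits again.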
I'm not sure if the analogy applies to those missing files in my pool.
Not really. It's more of them not being available because the right disk is not connected to the system (in a purely conceptual sense).
Snapshots are using almost no space:
Careful, that's not what that output says. The space shown as "used" by each snapshot corresponds to the space that would be freed if that snapshot were destroyed. If two snapshots point to the same block, it will not be accounted for in either of them. To see the space used by all snapshots of a dataset, you must look at the dataset's "usedsnap" property. Protip: zfs list -o space is a handy shortcut that shows the most relevant disk usage columns, instead of listing them out by hand (zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild).
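For your pool, that would be, for example:

Code:
zfs list -r -o space mypool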
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
Well, they all look fine in your pasted output above, so you can probably move past this part.
That's good. Thanks for clarifying. So I guess next is this:
Once the first point has been addressed, and if you find that some datasets are empty/emptier than they should be, you'll need to systematically unmount the datasets, rename the directories, mount the datasets, and copy the missing data over.
I think I am starting to understand better what's going on. I just unmounted mypool/NAS because I suspected that it was being mounted on top of the NAS directory in mypool, and - tadaah - that was the case. Lots of missing folders have reappeared.

The next step you suggest is to "rename the directories, mount the datasets, and copy the missing data over." But my intuitive hunch was to simply change the mountpoint for mypool/NAS. Is there any problem with that? I'm guessing that your suggestion is geared toward consolidating the existing directory structure, which I may want to do in certain cases, but not for mypool/NAS (because the contents of that dataset are simply misplaced; OMV put its backup there, I don't know why). So, if I'm understanding this correctly, changing the mountpoint should be a viable alternative, right?

For consistency I would probably also rename the dataset, but that should not be a problem either, right?

Not really. It's more of them not being available because the right disk is not connected to the system (in a purely conceptual sense).
I appreciate you staying in my analogy. So, just to clarify this further: by "not connected to the system", do you mean the dataset not being mounted or not being imported or something else?

Edit: Another quick question: given that the disappearance of "stuff" was due to one dataset being mounted on top of the folder of another and that the magic reappearance of some data after a reboot (and its subsequent re-disappearance after another reboot) was due to those datasets being mounted in a different order, how is this variation possible? How is the mount order determined?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
But my intuitive hunch was to simply change the mountpoint for mypool/NAS. Is there any problem with that?
Yes, two in fact:
  1. Unless you do a bunch of manual changes, you're just moving your tree of problems to a different spot
  2. TrueNAS assumes that data pools are mounted at /mnt
For consistency I would probably also rename the dataset, but that should not be a problem either, right?
That's fine. If you want different names for the datasets, you can just rename them and solve your issue that way.
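For example, something along the lines of (the new name is purely illustrative):

Code:
zfs rename mypool/NAS mypool/OMV-backup    # new name is illustrative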
I appreciate you staying in my analogy. So, just to clarify this further: by "not connected to the system", do you mean the dataset not being mounted or not being imported or something else?
Close to not mounted, hidden away by a directory [itself contained in a different dataset] of the same name. Or vice-versa, both situations can and have happened.
Edit: Another quick question: given that the disappearance of "stuff" was due to one dataset being mounted on top of the folder of another and that the magic reappearance of some data after a reboot (and its subsequent re-disappearance after another reboot) was due to those datasets being mounted in a different order, how is this variation possible? How is the mount order determined?
Not at all a bad question, but above my pay grade.
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
Unless you manually do a bunch of manual changes, you're just moving your tree of problems to a different spot
Could you explain?
Or I'll try to explain why I don't see the problem and you tell me where I'm going wrong: the problem that caused my stuff to become invisible is that mypool has a directory called NAS (with a lot of stuff in it). Unfortunately, there is also a dataset called mypool/NAS with /mnt/mypool/NAS as its mountpoint. So when mypool gets mounted to /mnt/mypool, its NAS folder ends up at /mnt/mypool/NAS. So far so good. But then mypool/NAS gets mounted at /mnt/mypool/NAS, essentially hiding the stuff from the mypool directory. That's why, when I do umount mypool/NAS, I get my stuff back.

So, given that my problem is the above behaviour, if I now do zfs set mountpoint=/mnt/OMV-backup mypool/NAS, won't that stop the above behaviour and hence solve my problem? What kind of problem am I moving to /mnt/OMV-backup?

TrueNAS assumes that data pools are mounted at /mnt
Why is that a problem?
 
Joined
Oct 22, 2019
Messages
3,641
Rather than playing with the mountpoints, you might have had better luck if you just checked / changed the altroot for the pool.
Code:
zpool get altroot mypool
Code:
zpool set altroot=/mnt mypool


But at this point, I'm tentative, since you already messed around with mountpoints, which should have been left at their defaults.

When you set the altroot for the pool, all datasets, including the root dataset, will mount in a predictable "nest" starting with /mnt/<poolname>
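As a manual sketch of the same thing (the TrueNAS web UI import should, as far as I know, do the equivalent):

Code:
zpool export mypool
zpool import -R /mnt mypool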

EDIT: This doesn't take into consideration if/how OMV veers away from defaults for its own ZFS usage.
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
Normally the mountpoint property of each dataset is inherited from its parent, and so forms a hierarchy. If you have manually set a mountpoint, it may conflict. The command below will list duplicate mountpoints:
Code:
zfs list -r -o mountpoint -H | sort | uniq -c | awk '$1>1'

You might find canmount vs mounted interesting:
Code:
zfs get -r -t filesystem name,canmount,mounted,mountpoint

You could also look at the mountpoint property:
Code:
zfs get -r -t filesystem mountpoint

You could filter the above output by adding one of the following to the end of the command (see the example after this list):
  • |grep -v default to remove normal hierarchical mountpoints.
  • |grep ' local$' to see where the mountpoint has been manually specified.
  • |grep inherited.from to see where the normal hierarchy flows from a local mountpoint specification.
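For example, restricting it to the pool in question:

Code:
zfs get -r -t filesystem mountpoint mypool | grep ' local$'
zfs get -r -t filesystem mountpoint mypool | grep -v default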
There are also legacy mountpoints, which are application dependent and not automatically mounted by ZFS, e.g. containers. You also have at least 1 zvol, and I don't know how iocage is supposed to work. Anything Core specific I don't know about, so I will avoid comment.
 

Tistos

Cadet
Joined
Jul 26, 2023
Messages
8
zfs list -r -o mountpoint -H | sort | uniq -c | awk '$1>1'
The output for this is

Code:
  33 legacy
   2 none


When I limit it to the problematic pool, I get

Code:
zfs list -r -o mountpoint -H mypool | sort | uniq -c | awk '$1>1'
  23 legacy


All legacy mount points are from jail datasets and my understanding is that there is nothing problematic about these.

The reason the command doesn't capture the problematic datasets is that the problem is not duplicate mount points but mount points overlapping with directories in other datasets of the same pool.

zfs get -r -t filesystem mountpoint
Here, the picture is very clear and simple: all mountpoints (except for the ones belonging to containers) are `default`

And now what?
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
If you need to map out where dataset mount points overlap directories, then you need to generate a directory list for each dataset in isolation from the others. Then count the intersections and inspect.

I would export the pool, then zpool import -N -R /mnt mypool, and then, for each dataset that is not a container/legacy mount, mount it and generate a sorted directory list.

I don't know if Core allows you to use mount -o zfsutil -t zfs datasetpath mountpoint to arbitrarily mount datasets to paths, as I would use this on SCALE to create a hierarchy of /tmp/mypool/dsnn/mountpointPath for each dataset, and I could leave them mounted until finished fixing. If you wanted to do this without zfsutil, then you could just set the mountpoint property of each dataset so you had /mnt/mypool/dsnn/mountpointPath, but then you would have to reset it later, and you would need to deal with /mnt/mypool first and separately so the directory list doesn't include dsnn.

Adjust paths by using the appropriate prefix.
* Generate a list of candidate datasets and mountpoints from zfs list
* Mount all those datasets
* cd /prefix && find mountpointPath -type d -print | sort > /var/tmp/dsnn.dir

Now you have a list of all directories and can look for intersections. You may have to deal with an empty directory existing at the mountpoint and generating false positives. You can probably remove these, but I'm not sure about FreeBSD/Core.

My first cut would be sort /var/tmp/ds* | uniq -c | sort -rn and look at counts > 1. You could also use comm to filter between pairs of files. Use whatever data analysis tool you want, but concentrate on the mountpoints.
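For a pair of those sorted lists, comm would look something like this (the file names follow the illustrative dsnn naming from above):

Code:
comm -12 /var/tmp/ds01.dir /var/tmp/ds02.dir    # print only the directories that appear in both lists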

After you find issues you can rename conflicting directories and regenerate until you are happy. Then restore anything you have to restore, umount, export, then webui import. Iterate if you missed something.
 