zfs send | receive creates mounts

seldo

Dabbler
Joined
Jan 4, 2021
Messages
47
Hello,

I'm new here, as I am working on moving my personal system from NAS4Free (11.0.0.4) to TrueNAS.
To move my files from one system to another, I will be using my backup drive to replicate the content & structure from my RAID 10.
That is because I will have to destroy my mirror and its vdevs, since I originally made 2 vdevs with one drive each.
(I guess I could also import one drive, copy the content to the new vdev and build the mirror after... anyway... I have something I'd like to understand nonetheless!)

I connected my backup drive to a VM running TrueNAS. That's only for testing purposes.
The idea is to verify that TrueNAS will be able to read and import the content from my previous system.
However, I am facing issues with too many mounts, many more than I was expecting!

All commands & results below are from TrueNAS.
The content has been created on NAS4Free.
I believe I would have had the same issue if creating/working with the pool on FreeNAS/TrueNAS.

Now onto my problem:
I was only expecting one mount: BACKUP_POOL => /mnt/BACKUP_POOL
Instead, I also have many other mountpoints/filesystems (?) that I believe were created by the zfs send | receive commands.
I never created any filesystems other than the pool itself (BACKUP_POOL).
My main pool is called MAIN_POOL, and it is a mirror of two vdevs with one disk each.

Here are my questions:
  1. Is this expected? Why do I see so many mounts?
  2. How can I view what will be mounted?
    1. I understand that filesystems on the pool are mounted
    2. I guess that since MAIN_POOL is a filesystem, this property is kept, hence the multiple mounts.
    3. However, zpool list doesn't show any.
  3. How to manage this?
  4. How can I back up/replicate better to avoid such a mess?
    1. I will most likely keep the same structure in the future, but create a few filesystems inside MAIN_POOL.
Here's what zpool tells me:
Code:
root@truenas[~]# zpool import
   pool: BACKUP_POOL
     id: 4404527925045269142
  state: ONLINE
status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    BACKUP_POOL  ONLINE
      da0        ONLINE


Here are the pools as seen by TrueNAS:
Code:
root@truenas[~]# zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
BACKUP_POOL  5.44T  2.21T  3.22T        -         -     0%    40%  1.00x    ONLINE  -
boot-pool    15.5G  1.16G  14.3G        -         -     0%     7%  1.00x    ONLINE  -


Here are my mounts:
Code:
root@truenas[~]# zfs mount       
// system drive mounts are removed
BACKUP_POOL                     /mnt/BACKUP_POOL
BACKUP_POOL/backup_fs/MAIN_POOL  /mnt/MAIN_POOL
BACKUP_POOL/second_snap         /mnt/BACKUP_POOL/second_snap
BACKUP_POOL/MAIN_POOL_BACKUP    /mnt/BACKUP_POOL/MAIN_POOL_BACKUP
BACKUP_POOL/backup_fs           /mnt/BACKUP_POOL/backup_fs
BACKUP_POOL/backup_fs/MAIN_POOL/personnel  /mnt/MAIN_POOL/personnel
BACKUP_POOL/backup_fs/MAIN_POOL/guest_fs  /mnt/MAIN_POOL/guest_fs
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo  /mnt/MAIN_POOL/photovideo
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/photo_incoming  /mnt/MAIN_POOL/photovideo/photo_incoming
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/photos  /mnt/MAIN_POOL/photovideo/photos
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/videos  /mnt/MAIN_POOL/photovideo/videos


Here's my pool history:
Code:
root@truenas[~]# zfs history BACKUP_POOL
History for 'BACKUP_POOL':
2017-05-12.03:28:43 zpool create -m /mnt/BACKUP_POOL BACKUP_POOL /dev/da0.nop
...
2017-05-12.08:49:51 zfs receive BACKUP_POOL/MAIN_POOL_BACKUP
2017-05-15.17:51:20 zfs receive BACKUP_POOL/second_snap
2017-05-16.16:51:28 zfs receive BACKUP_POOL/backup_fs/MAIN_POOL
...
2021-01-03.04:36:35 zpool export -f BACKUP_POOL


Thanks for welcoming me here.

Looking forward to your answers!
Seldo
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I was only expecting one mount: BACKUP_POOL => /mnt/BACKUP_POOL
Every dataset (including the pool root dataset, which you say you expected to see) has the potential to have a mountpoint defined.

When you replicate a dataset with zfs send | recv, you get an exact copy, including the attributes (one of which is mountpoint)... if you sent multiple child datasets with the -R option, their attributes are also transferred, so you will have as many mounts as datasets.
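
To illustrate with a minimal sketch (the pool, target and snapshot names here are made up, just to show the idea):
Code:
# recursive replication copies every child dataset together with its properties
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive backup/tank_copy

# each received dataset keeps its own mountpoint property,
# so you end up with one mount per dataset:
zfs list -r -o name,mountpoint backup/tank_copy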

Using the Advanced GUI to create a replication task, you can opt out of specific attributes if you wish (although you would then later need to set those for each dataset to ensure your restore would work as expected).

I understand that filesystems on the pool are mounted
Every dataset is its own filesystem.
have a look with zfs get all BACKUP_POOL and zfs get all BACKUP_POOL/backup_fs
and compare the results. All of the listed attributes can be set uniquely for each dataset (maybe not after creation in some cases, and not by you directly in others, like "mounted", which is system controlled).

I guess that since MAIN_POOL is a filesystem, this property is kept, hence the multiple mounts.
It's a dataset with children, so if you replicate recursively, you get them all.

However, zpool list doesn't show any.
You might find what you're seeking with zfs list

How to manage this?
It depends what you ultimately want to see and how you intend to operate in case of a restore.

I would recommend either not being bothered by the additional mounts, or not replicating the mountpoint attribute (and in both cases I recommend using replication tasks rather than the command line, as playing with attributes is a pain in the CLI)
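
If you do end up doing it at the command line anyway, this is the kind of thing I mean; the names are placeholders, and check zfs(8) on your system for which receive options your ZFS version actually supports:
Code:
# -u receives the datasets without mounting any of them
# (the mountpoint property is still copied, it just isn't acted on here)
zfs send -R tank@migrate | zfs receive -u backup/tank_copy

# newer OpenZFS can also override or exclude properties on receive,
# e.g. -o mountpoint=none or -x mountpoint -- check the man page first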

Since the original source is non-TrueNAS, it's going to be messy, since TrueNAS has middleware setup that helps with this exact problem, but can't do the work if it's not called.
 

seldo

Dabbler
Joined
Jan 4, 2021
Messages
47
Every dataset (including the pool root dataset, which you say you expected to see) has the potential to have a mountpoint defined.

When you replicate a dataset with zfs send | recv, you get an exact copy, including the attributes (one of which is mountpoint)... if you sent multiple child datasets with the -R option, their attributes are also transferred, so you will have as many mounts as datasets.

I wasn't aware that all the attributes were part of the exact copy. However, that makes sense reading it.
As for the datasets getting their attributes transferred, that's only if the dataset was of type filesystem in the first place, right?
So having
Code:
pool/not/an/fs
would only expose pool
But having
Code:
pool/fs1/fs2
would expose pool, fs1 and fs2

I will have to look at MAIN_POOL's history tomorrow to figure out what I did with the filesystems: which is one and which is not.

Every dataset is its own filesystem.
have a look with zfs get all BACKUP_POOL and zfs get all BACKUP_POOL/backup_fs
and compare the results. All of the listed attributes can be set uniquely for each dataset (maybe not after creation in some cases, and not by you directly in others, like "mounted", which is system controlled).

It's a dataset with children, so if you replicate recursively, you get them all.

You might find what you're seeking with zfs list

Your answer answers my question: it shows me all the mountpoints on my pool.
What I was not understanding was why specifying the pool, as in zfs list BACKUP_POOL, only showed BACKUP_POOL and nothing else.
What I missed was the -r, as in zfs list -r BACKUP_POOL.

As for setting attributes such as mountpoint if I decide not to replicate them, I found an example here: Managing ZFS Mount Points
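
Based on that page, something like this is probably what I would do (just a sketch; the paths are only examples for my layout):
Code:
# park the backup copy somewhere that cannot collide with the live pool:
zfs set mountpoint=/mnt/BACKUP_POOL/restored_MAIN_POOL BACKUP_POOL/backup_fs/MAIN_POOL

# or, at restore time, point it back at the original location:
zfs set mountpoint=/mnt/MAIN_POOL BACKUP_POOL/backup_fs/MAIN_POOL

# children that inherit mountpoint follow the parent; check with:
zfs get -r -o name,value,source mountpoint BACKUP_POOL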

It depends what you ultimately want to see and how you intend to operate in case of a restore.

I guess I was attracted to saving the data first, since once it is lost it is gone.
I was kind of keeping the restore part for the day it would really happen... hence currently missing the end of the story.

That's a good question though, and I have to say:
I don't know
Or rather:
I want to get my data back
but that would be it...
Let me search for information on the restore part!

Using the Advanced GUI to create a replication task, you can opt out of specific attributes if you wish (although you would then later need to set those for each dataset to ensure your restore would work as expected).

I would recommend either not being bothered by the additional mounts, or not replicating the mountpoint attribute (and in both cases I recommend using replication tasks rather than the command line, as playing with attributes is a pain in the CLI)

Since the original source is non-TrueNAS, it's going to be messy, since TrueNAS has middleware setup that helps with this exact problem, but can't do the work if it's not called.

My question came from seeing that I had MAIN_POOL mounted as /mnt/MAIN_POOL on the VM where only BACKUP_POOL is physically connected.
This made me freak out:
  1. why is MAIN_POOL mounted and how (that's what you've been answering above)
  2. on my NAS, when both pools MAIN_POOL and BACKUP_POOL are present and mounted, what does /mnt/MAIN_POOL connect to?
    I will check that tomorrow as it is getting late for starting a new activity of this type.
Regarding #2: I am fine having the mountpoint attribute replicated if:
  1. it doesn't cause me trouble in my daily usage (as in: what is behind /mnt/MAIN_POOL ?)
  2. it is set in a way that lets me do a restore without pain
As I will move to TrueNAS, I will use the replication tasks (and will read the docs beforehand), and it should all fit in.
In the meantime, I will format BACKUP_POOL and replicate to it with only one snapshot, just to have a 1:1 copy for the duration of the migration.
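
Roughly like this, I think (the snapshot and target names are only placeholders, and I still need to verify the exact flags):
Code:
# one-shot migration copy, received without mounting
zfs snapshot -r MAIN_POOL@migration
zfs send -R MAIN_POOL@migration | zfs receive -u BACKUP_POOL/MAIN_POOL_migration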
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
As for the datasets getting their attributes transferred, that's only if the dataset was of type filesystem in the first place, right?
All datasets are filesystems.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
pool/not/an/fs
would only expose pool
When you mount a filesystem, the directory structure under it is also mounted, so what you're saying is true only if "expose" means mount. You would still see those subdirectories under the pool mount... as stated in my previous post, all datasets are filesystems, so making your statement happen would mean subdirectories, not child datasets.
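
A quick sketch of the difference (pool and names made up):
Code:
zfs create tank/fs1          # a child dataset: its own filesystem, shows up in zfs list
mkdir /mnt/tank/just_a_dir   # a plain directory inside the parent dataset: no entry in zfs list

zfs list -r tank             # lists tank and tank/fs1 only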
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
on my NAS, when both pools MAIN_POOL and BACKUP_POOL are present and mounted, what does /mnt/MAIN_POOL connect to?
That's an important question, and there's no clear answer from this vantage point... I have seen circumstances where the backup copy would become mounted first, and so would prevent the true copy from mounting at that mountpoint.

It boils down to needing to handle it so that they don't try to mount (set canmount=noauto) or remove/change the mountpoint.
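
For example, something along these lines for your backup copy; canmount is not inherited, so it has to be set on each dataset:
Code:
# stop the backup copy (and its children) from auto-mounting over the live pool's paths
for ds in $(zfs list -r -H -o name BACKUP_POOL/backup_fs/MAIN_POOL); do
    zfs set canmount=noauto "$ds"
done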
 

seldo

Dabbler
Joined
Jan 4, 2021
Messages
47
All datasets are filesystems.

I queried the history of MAIN_POOL and compared it to what I see in BACKUP_POOL.
That helped me understand the mountpoint attribute with respect to how the datasets were created.

Code:
bob: ~# zpool history MAIN_POOL | grep create
2017-05-15.00:49:32 zpool create -m /mnt/MAIN_POOL MAIN_POOL mirror /dev/ada0.nop /dev/ada1.nop
2017-05-15.00:50:56 zfs create -o aclinherit=restricted -o aclmode=discard -o atime=off -o casesensitivity=sensitive -o compression=off -o dedup=off -o sync=standard MAIN_POOL/photovideo
2017-05-15.00:52:58 zfs create MAIN_POOL/photovideo/photos
2017-05-15.08:58:39 zfs create MAIN_POOL/photovideo
2017-05-15.09:07:20 zfs create MAIN_POOL/guest_fs
2017-05-15.09:15:37 zfs create MAIN_POOL/photovideo/videos
2017-05-15.09:15:54 zfs create MAIN_POOL/personnel
2017-05-15.23:05:05 zfs create MAIN_POOL/photovideo/photo_incoming


and the mount points on MAIN_POOL:
Code:
bob: ~# zfs list -r -o name,mountpoint MAIN_POOL
NAME                                 MOUNTPOINT
MAIN_POOL                            /mnt/MAIN_POOL
MAIN_POOL/guest_fs                   /mnt/MAIN_POOL/guest_fs
MAIN_POOL/personnel                  /mnt/MAIN_POOL/personnel
MAIN_POOL/photovideo                 /mnt/MAIN_POOL/photovideo
MAIN_POOL/photovideo/photo_incoming  /mnt/MAIN_POOL/photovideo/photo_incoming
MAIN_POOL/photovideo/photos          /mnt/MAIN_POOL/photovideo/photos
MAIN_POOL/photovideo/videos          /mnt/MAIN_POOL/photovideo/videos


and the same mountpoints after replication in BACKUP_POOL:
Code:
BACKUP_POOL mountpoints
Haga:~ user$ zfs list -r -o name,mountpoint BACKUP_POOL
NAME                                                       MOUNTPOINT
BACKUP_POOL                                                /mnt/BACKUP_POOL
BACKUP_POOL/MAIN_POOL_BACKUP                               /mnt/BACKUP_POOL/MAIN_POOL_BACKUP
BACKUP_POOL/backup_fs                                      /mnt/BACKUP_POOL/backup_fs
BACKUP_POOL/backup_fs/MAIN_POOL                            /mnt/MAIN_POOL
BACKUP_POOL/backup_fs/MAIN_POOL/guest_fs                   /mnt/MAIN_POOL/guest_fs
BACKUP_POOL/backup_fs/MAIN_POOL/personnel                  /mnt/MAIN_POOL/personnel
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo                 /mnt/MAIN_POOL/photovideo
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/photo_incoming  /mnt/MAIN_POOL/photovideo/photo_incoming
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/photos          /mnt/MAIN_POOL/photovideo/photos
BACKUP_POOL/backup_fs/MAIN_POOL/photovideo/videos          /mnt/MAIN_POOL/photovideo/videos
 

seldo

Dabbler
Joined
Jan 4, 2021
Messages
47
When you mount a filesystem, the directory structure under it is also mounted, so what you're saying is true only if "expose" means mount. You would still see those subdirectories under the pool mount... as stated in my previous post, all datasets are filesystems, so making your statement happen would mean subdirectories, not child datasets.

By "expose", I meant displayed in zfs list.
But in the end, if it shows in zfs list, it is a dataset and can therefore have a mountpoint.

That's an important question and there's no clear answer from this vantage point.. I have seen circumstances where the backup copy would become mounted first, so would prevent the true copy from mounting in that mountpoint.

It boils down to needing to handle it so that they don't try to mount (set canmount=noauto) or remove/change the mountpoint.

I have three possibilities:
  1. do not copy the mountpoint attribute on replication (mountpoint=none)
  2. mountpoint is copied, but I then set canmount=noauto
  3. I copy mountpoint, don't set canmount, and I am in the situation I am in today
With canmount=noauto, I still have my mountpoint set to the one from the original pool.
I'm just worried that it bites me later if I just allow the mount without verifying the mountpoint.
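
Something like this should let me verify before I allow anything to mount (just a sketch):
Code:
# check where every backup dataset would mount, and whether it currently is mounted
zfs get -r canmount,mountpoint,mounted BACKUP_POOL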

Picking 1. or 2., I'll need to update something before I can mount my backup.
I have to make up my mind on how I treat this.

For now, I'd say option #1.
Having only the backup pool mount, so that replication can happen, is what will bother me the least.
I can always set mountpoints later when I need to.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Option 2 makes it easier to restore (with the same mountpoint as original), so has that advantage.
 

seldo

Dabbler
Joined
Jan 4, 2021
Messages
47
Option 2 makes it easier to restore (with the same mountpoint as original), so has that advantage.
Yes, it seems the most practical way.
I just have to write myself a note to make sure to check what is mounted before changing the canmount property.

Let me practice that by making my backup before moving systems. I'll then get the chance to restore it on the new system (to a TrueNAS-created pool).
If I use TrueNAS replication on a VM, would I be able to see the commands used so I can manually replicate them on my current system?

And to restore, do I just do it the opposite way? That's what I will have to do.
As if it is all manual, I back up with:
Code:
zfs send A | zfs receive B

and restore with:
Code:
zfs send B | zfs receive A

+ setting canmount=on
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
And to restore, do I just do it the opposite way? That's what I will have to do
Right, but you will need to also set readonly=off (if you did it with replication and kept that default).
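
Roughly like this (the target pool and snapshot names are placeholders; adapt to your layout):
Code:
zfs send -R BACKUP_POOL/backup_fs/MAIN_POOL@latest | zfs receive -u NEW_POOL/MAIN_POOL
zfs set readonly=off NEW_POOL/MAIN_POOL   # readonly is inherited, so the parent is enough
zfs set canmount=on NEW_POOL/MAIN_POOL    # canmount is not inherited; repeat for children if needed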
 

q/pa

Explorer
Joined
Mar 16, 2015
Messages
64
I am testing send | receive operations myself right now. I plan to
Code:
zfs send -Rv pool/dataset@snap | zfs receive -o mountpoint=/mnt/remotebackup -o readonly=on -euv remotebackuppool


When restoring I will use
Code:
zfs send -Rvb remotebackuppool/dataset@snap | zfs receive -euv pool


During testing, after backup & restore, all the original properties were as before, I think mainly due to the -b option when sending back. The user performing these operations needs the appropriate zfs permissions, of course.
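
By permissions I mean delegation along these lines (only a sketch; the user name is made up and the permission list may need adjusting):
Code:
# sending side:
zfs allow backupuser send,snapshot,hold pool/dataset
# receiving side:
zfs allow backupuser receive,create,mount remotebackuppool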
 