SOLVED cannot mount '{directory}': failed to create mountpoint

Hello all,

I've been migrating my FreeNAS setup from one server to another, version 9.10.2-U2 on both; the new server is a fresh install. I scrubbed and exported the volume on the old server. When I go to import the volume on the new server, I get a "cannot mount '/mnt/tank1/backups/vsan2/FreeNAS-config': failed to create mountpoint" error, and the GUI does not show the imported volume at all, though a "zpool list" shows it. Data in the other imported directories is accessible via the command line.

"vsan2" is a directory that is replicated from another server. I believe it is set read-only on the exported volume. All other data appears there, except the mulitple directories below "vsan2".

Anyone seen this before?
 
Some additional information:
Plenty of free space (~6 TB)
Old system: Supermicro X8SI6-F, 24GB ECC, LSI 2008
New system: Dell R510, 24GB ECC, H200 (reflashed to LSI 2008)
All six drives are Seagate Constellation ES.3 4TB SAS
 

PhilipS

Does the FreeNAS-config directory exist on the vsan2 dataset?
What is the result of zfs get readonly tank1/backups/vsan2?
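From the shell, that would be something like:

Code:
ls -ld /mnt/tank1/backups/vsan2/FreeNAS-config
zfs get readonly tank1/backups/vsan2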
 
It appears that the dataset vsan2 does not exist on the imported volume. It did exist (along with all its child datasets) before the export on the old server, and it exists on the source of the replication.
It is not read-only on the source, but it is on the replicated destination before the export.
 

PhilipS

Is vsan2 shown when you run zfs list?
 
Yes. However, the dataset does not appear to be mounted.

Code:
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                             654M  28.2G    64K  none
freenas-boot/ROOT                                        647M  28.2G    29K  none
freenas-boot/ROOT/9.10.2-U2                              647M  28.2G   637M  /
freenas-boot/ROOT/Initial-Install                          1K  28.2G   636M  legacy
freenas-boot/ROOT/default                                135K  28.2G   637M  legacy
freenas-boot/grub                                       6.35M  28.2G  6.35M  legacy
tank1                                                   4.40T  6.13T    23K  /mnt/tank1
tank1/.system                                           91.3M  6.13T    25K  legacy
tank1/.system/configs-5ece5c906a8f4df886779fae5cade8a5  23.8M  6.13T  23.8M  legacy
tank1/.system/cores                                      834K  6.13T   834K  legacy
tank1/.system/rrd-5ece5c906a8f4df886779fae5cade8a5      37.0M  6.13T  37.0M  legacy
tank1/.system/samba4                                    1.11M  6.13T  1.11M  legacy
tank1/.system/syslog-5ece5c906a8f4df886779fae5cade8a5   28.5M  6.13T  28.5M  legacy
tank1/backups                                           12.7G  6.13T    23K  /mnt/tank1/backups
tank1/backups/vsan2                                     12.7G  6.13T    23K  /mnt/tank1/backups/vsan2
tank1/backups/vsan2/FreeNAS-config                       220M  6.13T   220M  /mnt/tank1/backups/vsan2/FreeNAS-config
tank1/backups/vsan2/iSCSI.VSAN2.VSPHERE                 12.5G  6.13T  12.4G  -
tank1/jails                                             9.07G  6.13T  32.5K  /mnt/tank1/jails
tank1/jails/.warden-template-pluginjail                  410M  6.13T   410M  /mnt/tank1/jails/.warden-template-pluginjail
tank1/jails/nextcloud_1                                 5.20G  6.13T  5.09G  /mnt/tank1/jails/nextcloud_1
tank1/jails/plexmediaserver_1                           3.47G  6.13T  3.55G  /mnt/tank1/jails/plexmediaserver_1
tank1/vsan1                                             4.38T  6.13T    25K  /mnt/tank1/vsan1
tank1/vsan1/Development                                 11.4G  6.13T  11.4G  /mnt/tank1/vsan1/Development
tank1/vsan1/FreeNAS-config                               220M  6.13T   220M  /mnt/tank1/vsan1/FreeNAS-config
tank1/vsan1/Media                                       3.70T  6.13T  3.70T  /mnt/tank1/vsan1/Media
tank1/vsan1/Software                                     493G  6.13T   493G  /mnt/tank1/vsan1/Software
tank1/vsan1/Users                                       20.2G   480G    23K  /mnt/tank1/vsan1/Users
tank1/vsan1/Working                                      161M  6.13T   161M  /mnt/tank1/vsan1/Working
tank1/vsan1/iSCSI.VSAN1.TEST                            40.6G  6.17T    12K  -
tank1/vsan1/iSCSI.VSAN1.VSPHERE                          130G  6.13T   130G  -
tank1/vsan1/tftp                                         180K  6.13T   164K  /mnt/tank1/vsan1/tftp
 

PhilipS

The data is there, so this most likely has to do with the datasets being made read-only before their mount points were created.
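
A minimal sketch of that failure mode, using a hypothetical scratch pool named testpool:

Code:
# create a parent dataset and make it read-only
zfs create testpool/parent
zfs set readonly=on testpool/parent
# creating a child dataset now succeeds, but mounting it fails,
# because its mountpoint directory can't be made inside the
# read-only parent filesystem:
zfs create testpool/parent/child
# cannot mount 'testpool/parent/child': failed to create mountpoint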

What is the result of zfs get readonly tank1/backups and what is the directory listing of /mnt/tank1/backups?
 
Code:
[root@vsan1] ~# ll /mnt/tank1/
total 8
drwxrwxr-x  5 root  wheel    5 Jan 21 13:25 ./
drwxr-xr-x  4 root  wheel  192 Apr 19 13:15 ../
drwxrwxr-x  3 root  wheel    3 Jan 21 13:39 backups/
drwxr-xr-x  9 root  wheel    9 Jan  6 17:13 jails/
drwxrwxr-x  9 user  home     9 Jan  9 15:05 vsan1/
[root@vsan1] ~# ll /mnt/tank1/backups/
total 2
drwxrwxr-x  3 root  wheel  3 Jan 21 13:39 ./
drwxrwxr-x  5 root  wheel  5 Jan 21 13:25 ../
drwxr-xr-x  2 root  wheel  2 Jan 21 13:39 vsan2/
[root@vsan1] ~# ll /mnt/tank1/backups/vsan2
total 1
drwxr-xr-x  2 root  wheel  2 Jan 21 13:39 ./
drwxrwxr-x  3 root  wheel  3 Jan 21 13:39 ../
 
Code:
[root@vsan1] ~# zfs get readonly tank1/backups
NAME           PROPERTY  VALUE  SOURCE
tank1/backups  readonly  off    default
[root@vsan1] ~# zfs get readonly tank1/backups/vsan2
NAME                 PROPERTY  VALUE  SOURCE
tank1/backups/vsan2  readonly  on     local
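
For completeness, the mounted state can be queried alongside the property:

Code:
zfs get readonly,mounted tank1/backups/vsan2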
 

PhilipS

You can try the following commands:

Code:
zfs set readonly=off tank1/backups/vsan2
mkdir /mnt/tank1/backups/vsan2/FreeNAS-config
zfs mount -a
zfs set readonly=on tank1/backups/vsan2
 
So, I ran
Code:
zfs set readonly=off tank1/backups/vsan2
zpool export tank1

then reimported the volume via the web interface, and it imported with no issues. THANK YOU.
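
For reference, the shell equivalent of the GUI reimport would be roughly:

Code:
zpool import -R /mnt tank1

though on FreeNAS the GUI import is the better route, since it also registers the volume in the configuration database.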

I'm still somewhat concerned that some interaction with setting up the replication made the parent dataset read-only and thus caused this blocking issue.
 

PhilipS

I'm still somewhat concerned that some interaction with setting up the replication made the parent dataset read-only and thus caused this blocking issue.

The replication tasks in FreeNAS check whether the target dataset exists; if it does not, they create it with the readonly attribute set. If the dataset is nested and the parent dataset does not exist on the remote server, they create the parent dataset with readonly set as well. What happens in that case is that the parent dataset is made read-only BEFORE the child dataset is created, so the child's mount point directory cannot be created.

The data still transfers; it just can't be mounted. The workaround would be to create all the parent datasets first, or to replicate the parent datasets as well.
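
In this thread's layout, that workaround would have meant something like the following on the destination, before the first replication ran:

Code:
# manually created datasets default to readonly=off, so the
# replication only creates the child datasets, whose mountpoint
# directories can then be made inside the writable parents
zfs create tank1/backups
zfs create tank1/backups/vsan2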

I reported this as a bug a while back; I'm unclear whether it is actually being fixed, though. Also, the fix I proposed will only work with one nested dataset - nested sibling datasets that are to be replicated will still not be mounted. I now think a better solution would be to create the parent datasets with readonly=off whenever the parent dataset is not part of the actual replication.

https://bugs.pcbsd.org/issues/20971
 