Replication causing datasets to act unmounted

Status
Not open for further replies.

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Did you run the ls command to see if the filesystem was there? If it's not there, the simple test is to mount that dataset and see if the errors stop (zfs mount freenas-backup/example).
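A minimal sketch of that check, treating the dataset name freenas-backup/example from the post above as a placeholder:

```shell
# Ask ZFS directly whether the dataset is mounted (placeholder name)
zfs get -H -o value mounted freenas-backup/example

# If it reports "no", mount it manually and watch whether the errors stop
zfs mount freenas-backup/example

# Confirm the contents now show up on disk
ls /mnt/freenas-backup/example
```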
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
did you run the ls command to see if the file system was there?

I did, and it said the filesystem was not there. While I could just mount it again, the problem is that I replicate every night and the errors start again immediately after the replication finishes. I think moving the replication to a dataset within the pool on PULL will solve things, as you suggested.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Most, if not all, unmounted datasets I have encountered were related either to long path names (dataset name too long) or to a combination of dataset name length and file name length.
Instead of replicating, just run the zfs rename command (ZFS has no separate move command; rename is how you move a dataset within a pool).
The idea is to reduce the length of the path.
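As a hedged illustration of that idea (all dataset names here are hypothetical), zfs rename moves a dataset within the same pool, which also shortens the mountpoint path derived from its name:

```shell
# Move a deeply nested dataset to a shorter path within the same pool
# (hypothetical names; adjust to your layout)
zfs rename tank/backups/very-long-dataset-name tank/short

# Verify the new, shorter mountpoint
zfs get mountpoint tank/short
```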
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
I waited a couple of nights for my replication tasks to run, and I can report that the statvfs messages are no longer appearing. I was also having another problem: in my previous setup I had created a dataset within my top-level dataset on PULL that was not on the PUSH box, and it had nothing in it except SSH keys. It seemed to get overwritten after every replication, because the SSH keys would be gone the next time I tried logging in through the CLI. That problem is also fixed.

So, just for clarification: I followed @depasseg's suggestion and changed the PUSH box to replicate not to the top-level dataset on PULL, but to a child dataset. Example:

Previous setup (with problems):
PULL machine received replicated data at /mnt/tank-backup

New setup (problem-free):
PULL machine receives replicated data at /mnt/tank-backup/replicate
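A sketch of how that child-dataset target could be set up, assuming placeholder pool/dataset names (the actual replication target is then pointed at the child in the FreeNAS GUI):

```shell
# On PULL: create a child dataset to receive the replication stream,
# so the pool's top-level dataset itself is never overwritten.
# (Pool/dataset names are placeholders.)
zfs create tank-backup/replicate

# On PUSH: in the replication task, set the remote target to the child,
# e.g. "tank-backup/replicate" rather than "tank-backup".
```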

Thanks guys!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Glad to hear!
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
OK, so just to be clear, I want to understand whether what I'm experiencing is expected behavior on 9.10 or whether something is wrong. I have a Push system that serves shares etc., and a Pull system whose sole purpose is to be an offsite backup.

Push takes periodic recursive snapshots of /mnt/tank1/datasetA/datasetAA/ etc., and they are replicated to a brand-new, unmolested Pull box via the GUI. There are some random errors during the replication process, but once the data finishes its initial transfer, subsequent replications look good, and the Alert box is green for both Push and Pull. Scrubs come out clean.

Here's where it may be behaving in a way that I find unintuitive. In the web GUI on Pull, the Storage tab lists the pools and datasets as I would expect. However, if I go to share a dataset and try to browse for it, I only see the pool available. If I SSH into the box and do the following...
Code:
[root@pull] ~# cd /mnt
[root@pull] /mnt# ls
./  ../  md_size  tank2/
[root@pull] /mnt# cd tank2
[root@pull] /mnt/tank2# ls
./  ../  deathstar/ xenssd/
[root@pull] /mnt/tank2# cd deathstar/
[root@pull] /mnt/tank2/deathstar# ls
./  ../


This last ls is what's unexpected to me. I would expect to see a listing of the child datasets.

As such, the question is: is the scenario I've described the expected behavior, or is there something I need to troubleshoot? Also, if it is the expected behavior, how do I go about doing things like verifying that the backups work, or restoring data? Do I clone a snapshot to a dataset and then access it from there?
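One way to narrow this down, using the pool and dataset names from the transcript above, is to ask ZFS directly whether the child datasets are mounted instead of inferring it from ls:

```shell
# List every dataset under the parent with its mounted state and mountpoint.
# If "mounted" is "no" for the children, ls on the parent directory will
# show nothing even though the datasets (and their snapshots) still exist.
zfs list -r -o name,mounted,mountpoint tank2/deathstar
```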
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
As far as I know, if the dataset is not mounted, its name and content will not be visible with the ls command.
If you have the snapshot, then you have your data.
As I have reported a few times, I suspect the issue is related to the dataset path length, including any file name, exceeding a certain limit. If that length is too great, it will prevent mounting of the dataset.
A quick and easy way to check this is by moving the dataset.
This is done as follows:
Code:
zfs rename tank2/deathstar/your-dataset tank2/your-dataset

Make sure you keep the dataset name as short as possible.

You may need to reboot your system, or mount the dataset manually if its content is still not available.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
As far as I know, if the dataset is not mounted, its name and content will not be visible with the ls command.
If you have the snapshot, then you have your data.
As I have reported a few times, I suspect the issue is related to the dataset path length, including any file name, exceeding a certain limit. If that length is too great, it will prevent mounting of the dataset.
A quick and easy way to check this is by moving the dataset.
This is done as follows:
Code:
zfs rename tank2/deathstar/your-dataset tank2/your-dataset

Make sure you keep the dataset name as short as possible.

You may need to reboot your system, or mount the dataset manually if its content is still not available.

Thank you for your quick response to the question. I don't feel like this is the case. I'm thinking it may be the expected behavior of a dataset that's marked read-only, which the ones on the Pull system happen to be.

If it were the name-length issue, I would expect to see the same behavior on the Push system as well. However, I will investigate both your suggestion and temporarily turning the dataset to RW to test, via:

Code:
zfs set readonly=off dataset_name


If it is the read-only attribute, what I also don't understand is why it behaves differently from ordinary read-only filesystems. In my experience, read-only filesystems are just that: you can ls them, cat files, etc.; you just can't modify them. That doesn't seem to be the case here.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
I can confirm that recursively marking the datasets RW
Code:
 zfs set readonly=off dataset_name
allows the functionality that I expect (and that I think used to exist in the past). The current implementation, if working as intended, is not intuitive to me at all. I may need to leave it RW and just be cognizant of the edge cases, and be sure to mark the shares RO.
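For reference, zfs set applies a property to the named dataset, and children pick it up through inheritance unless they carry a local override; a minimal sketch with placeholder names:

```shell
# Set readonly=off on the parent; children inherit it unless they
# have a local readonly setting of their own. (Names are placeholders.)
zfs set readonly=off tank2/deathstar

# Check the effective value and its source for every child dataset
zfs get -r -o name,property,value,source readonly tank2/deathstar

# To keep clients from writing, mark the shares read-only at the
# sharing layer (SMB/NFS) rather than the dataset layer.
```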
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I think I had a case where datasets mounted fine on reboot. At some point, I added or moved some source code, and the folder structure and file names were so long that, on reboot, the combination would cause the dataset to become unmountable.
Doing the zfs rename allowed me to regain access to the dataset.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You can check the mountpoints of your nested datasets with 'zfs get mountpoint dataset/subdataset'.
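A short sketch using the dataset names from earlier in the thread (output will vary per system); adding -r walks the whole subtree, and zfs mount -a attempts to mount every filesystem that isn't mounted yet:

```shell
# Check a single nested dataset's mountpoint
zfs get mountpoint tank2/deathstar

# Or walk the entire subtree in one command
zfs get -r mountpoint tank2/deathstar

# Attempt to mount all unmounted ZFS filesystems
zfs mount -a
```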
 