Replication "up to date" but I see no files on target

Status
Not open for further replies.

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
I have a source server (FS10) replicating to a destination (FS09), one single dataset. Currently FS10 says "up to date" and has been replicating data for months now.

I go onto FS09 to check the data and the directory is empty, and it's not currently replicating, so what gives? Both mount and zfs mount show it mounted properly, and the GUI (on FS09) shows it has data. I have another set of servers in an identical configuration where I can browse the data fine.

upload_2016-9-1_13-35-54.png

But df shows nothing is being used? (The top command in the screenshot is zfs list.)

upload_2016-9-1_13-38-41.png


Here you can see it's empty:

upload_2016-9-1_13-42-46.png
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,477
Do you have "recursive" selected in your replication settings?
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,477
Can you please post screenshots of your replication settings, and maybe a screenshot of your complete storage layout?
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Can you please post screenshots of your replication settings, and maybe a screenshot of your complete storage layout?

Storage layout is just 3 vdevs in a raidz3 all healthy no errors.

Here are my repl settings, I have another server with the same settings replicating to a target that I can see the data (on the same OS version as well)


upload_2016-9-1_15-42-45.png
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Storage layout is just 3 vdevs in a raidz3 all healthy no errors.
That doesn't make sense. RAIDZ3 only applies to vdevs, and having only three drives doesn't make sense for RAIDZ3. Can you please clarify?
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
That doesn't make sense. RAIDZ3 only applies to vdevs. And having only three drives doesn't make sense for RAIDZ3. Can you please clarify.

No it doesn't make sense, my mistake. I have 1 pool consisting of 3 raidz3 vdevs. I'll just post a screenshot.

upload_2016-9-1_16-23-11.png
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Now that makes sense ;)
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Since your source is storage01/dbbackups/something-db180, recursive is irrelevant. What are the contents of /mnt/storage01, since that is your destination? Is that the same as your other systems? I would have guessed that the source and destination pools/datasets would be similar. Meaning you are telling it to replicate the contents of something-db180 into the dataset location called storage01. So try putting it in storage01/dbbackups/something-db180.
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Since your source is storage01/dbbackups/something-db180, recursive is irrelevant. What are the contents of /mnt/storage01, since that is your destination? Is that the same as your other systems? I would have guessed that the source and destination pools/datasets would be similar. Meaning you are telling it to replicate the contents of something-db180 into the dataset location called storage01. So try putting it in storage01/dbbackups/something-db180.

/mnt/storage01 contains dbbackups (on the remote location currently). The folder is there, but it should contain something-db180 (the subfolder on the source with all the data). I agree recursive is currently irrelevant, but it shouldn't interfere with anything, right?

I think I see what you are saying.. my one server that is working is replicating

storage01/dbbackups to "storage01" dataset on the remote side.

The three that aren't working properly are replicating

storage01/dbbackups/something-dbxxx to "storage01" dataset on the remote side

So should I be replicating the latter to storage01/dbbackups? Does it hate the fact that the source has a subfolder?

The thing is, when I started replicating, the dataset appeared on the destination. If you look at the very first image I posted, you can see it actually replicated the data to the correct location (dbbackups/something-dbxxx).. the data is just not appearing on the filesystem for some reason.

Hopefully all that makes sense. Just trying to determine if this is user error or a bug :P.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
It is possible for a filesystem directory to clobber a dataset mountpoint. In this case, it's possible that you are seeing the directory contents of /mnt/storage01/dbbackups, not the contents of the dbbackups dataset mounted at that location.
storage01/dbbackups to "storage01" dataset on the remote side.
What else is in the /mnt/storage01 directory? Any chance the something-db180 stuff is there?
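One way to check for this kind of shadowing (a sketch only, using the dataset names from this thread; run it on the destination box, and adjust names to your layout) is to compare what ZFS thinks is mounted with what the OS actually shows:

```shell
# Is the sub-dataset actually mounted? Prints "yes" or "no".
zfs get -H -o value mounted storage01/dbbackups/something-db180

# Where does ZFS think it should be mounted?
zfs get -H -o value mountpoint storage01/dbbackups/something-db180

# Compare against what the OS currently has mounted at that path.
mount | grep dbbackups

# If the dataset reports mounted=no, the directory you are browsing is
# just an empty stub on the parent filesystem, not the dataset itself.
```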
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
It is possible for a filesystem directory to clobber a dataset mountpoint. In this case, it's possible that you are seeing the directory contents of /mnt/storage01/dbbackups, not the contents of the dbbackups dataset mounted at that location.

What else is in the /mnt/storage01 directory? Any chance the something-db180 stuff is there?

Nothing, just an empty dbbackups directory with no subfolder underneath it. So I'm not sure where it's sending these files.. it's keeping up to date with any changes on the source and pruning accordingly. It's just not displaying anything, so it feels like a mounting issue.

The mount shows up as dbbackups but not dbbackups/something-db180, I wonder if there is a way to fix this.

upload_2016-9-1_17-32-56.png
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The mount shows up as dbbackups but not dbbackups/something-db180, I wonder if there is a way to fix this.
What is that the output of?

What is the output of zfs get mountpoint storage01/dbbackups/something-db180?

dbbackups isn't the same as something-db180.
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
What is that the output of?

What is the output of zfs get mountpoint storage01/dbbackups/something-db180?

dbbackups isn't the same as something-db180.

It shows it as mounted, and snapshots are up to date.. this is why I am baffled.

upload_2016-9-2_12-38-47.png
 


fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Anyone else have an idea? To add to the mix, here is my Splunk graph of all my FreeNAS boxes in "TB Free".

You can see the source/destination servers are properly synced, including the snapshot purge on Aug 31st.

upload_2016-9-6_10-20-10.png
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
It is possible for a filesystem directory to clobber a dataset mountpoint. In this case, it's possible that you are seeing the directory contents of /mnt/storage01/dbbackups, not the contents of the dbbackups dataset mounted at that location.
Did you confirm that this isn't the case?
Have you confirmed that the dataset is mounted? "zfs mount storage01/dbbackups/something-db180"
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Did you confirm that this isn't the case?
Have you confirmed that the dataset is mounted? "zfs mount storage01/dbbackups/something-db180"

The subfolder (something-db180) is not mounted; only the root of it is mounted (storage01/dbbackups).

Should I have set the remote replication path to storage01/dbbackups? instead of just storage01?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The subfolder (something-db180) is not mounted, the root of it is mounted (storage01/dbbackups).
If the ZFS dataset isn't mounted, then you won't see its contents. Even if the root is mounted, the sub-dataset must also be mounted (this happens automatically at boot or during import).
Should I have set the remote replication path to storage01/dbbackups? instead of just storage01?
Yes. I've found that replicating into the root dataset causes issues, and it's safer and cleaner to replicate into a specific sub-dataset used for backups.
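Mounting the sub-dataset by hand might be worth trying before anything destructive. A sketch (dataset names taken from this thread; the rmdir step is only safe after you've verified the directory is an empty stub):

```shell
# Try mounting the replicated sub-dataset directly.
zfs mount storage01/dbbackups/something-db180

# If that fails with "directory is not empty", an ordinary directory is
# sitting on the mountpoint. Move the empty stub aside, then retry:
#   rmdir /mnt/storage01/dbbackups/something-db180
#   zfs mount storage01/dbbackups/something-db180

# Or mount every unmounted dataset in the pool in one shot:
zfs mount -a
```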
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
If the ZFS dataset isn't mounted, then you won't see its contents. Even if the root is mounted, the sub-dataset must also be mounted (this happens automatically at boot or during import).

Yes. I've found that replicating into the root dataset causes issues, and it's safer and cleaner to replicate into a specific sub-dataset used for backups.

Ugh, so am I going to have to re-replicate, or can I somehow re-mount? The data has been copied over; it has to be somewhere.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Ugh, so am I going to have to re-replicate, or can I somehow re-mount? The data has been copied over; it has to be somewhere.
If you knew where your data was, it would be possible to replicate it locally and then change the FreeNAS job. But since it's not clear, and it seems like there have been other issues, my suggestion would be to destroy the target datasets and start over.

But if you want to keep trying, provide the full output of "zfs list" and "zfs mount" (in code tags). You can replace any sensitive names with placeholder text, as long as we know what to call them.
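For reference, the requested diagnostics, plus a local-copy sketch in case the data turns out to be intact in an unmounted or misplaced dataset (the snapshot name below is illustrative, not one from this thread):

```shell
# Show every dataset in the pool with its space usage, and every
# currently mounted ZFS filesystem.
zfs list -r storage01
zfs mount

# If the replicated data exists but landed in the wrong place, a local
# send/receive can move it under the intended parent without
# re-replicating over the network:
#   zfs send storage01/something-db180@auto-snap | \
#       zfs receive storage01/dbbackups/something-db180
```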
 