backups appear to be successful but the data is not actually visible over ssh on the disk

Joined
Aug 10, 2018
Messages
46
Hi all,

I have followed the semi-automated instructions for creating a backup of a dataset from one FreeNAS box to another. The backup appears to be successful as far as I can tell from the web GUI of both source and destination: if I view Storage on the destination machine, all the sub-datasets appear and seem to be occupying the expected amount of disk space. However, if I ssh into the backup machine and cd/ls through the directories, none of the data appears to be there. Where is my data? Is this expected behaviour? FreeNAS 11.1-U6.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What console output do you see that leads you to that conclusion?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I don't know the procedure you are referring to, but if it is done through replication, the dataset needs to be mounted before you can see its contents. If the volume is set to readonly, it will not be mounted at all until you set readonly=off and then mount it.
To mount it, you can restart your backup server, or you can detach the volume and attach it again.
Only then will the data be accessible through "ls".
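To make that concrete, here is a sketch of what checking and fixing the mount state might look like from the backup box's shell. The pool/dataset names are taken from later in this thread; substitute your own:

```shell
# Show mount and readonly state for the replicated datasets
zfs get -r mounted,readonly,mountpoint backup-volume

# If readonly=on is what is preventing the mount, clear it and
# mount everything that is mountable
zfs set readonly=off backup-volume/backup-volume
zfs mount -a
```

Note that flipping readonly=off on a replication target means the next incremental receive may need to force a rollback, so some people prefer to leave it readonly and only browse via snapshots.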
 
Joined
Aug 10, 2018
Messages
46
root@freenas-backup:/mnt # du -sh backup-volume/
61K backup-volume/

root@freenas-backup:/mnt/backup-volume/backup-volume # ls
[all of the directories that form top level datasets on the origin system, as expected]

If I cd into one of the directories/datasets and ls, there is nothing there.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
The only reliable means of checking that your replication is successful is to list the snapshots that have been replicated. They must be the same as on the source.
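One way to do that comparison end-to-end, as a sketch (the host and pool names here are placeholders, not from this thread):

```shell
# List snapshot names on each side and diff them; an empty diff means
# the backup has every snapshot the source has.
ssh source-box zfs list -H -t snapshot -o name -r tank | sort > /tmp/src-snaps.txt
ssh backup-box zfs list -H -t snapshot -o name -r backup-volume | sort > /tmp/dst-snaps.txt
diff /tmp/src-snaps.txt /tmp/dst-snaps.txt
```

Since the dataset prefixes differ between pools (tank/... vs backup-volume/...), in practice you may want to strip the pool component with sed before diffing.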
 
Joined
Aug 10, 2018
Messages
46
OK, grand. Looking under Storage -> Snapshots, everything appears to be present and correct. One minor thing: I note that the backups are accumulating. Is there a way I can have them expire after a certain amount of time, like a month or so?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
@ezekiel.incorrigible, what do you call a backup? Are you referring to the snapshot itself?
If so, you need to modify your automatic snapshot lifetime.
Having a lot of snapshots may or may not significantly affect your system's performance under certain conditions.
If you create snapshots with a lifespan of a few days or weeks, the replication to a backup will need to be done within that time frame if you still want to perform incremental replication.
On the backup side, the snapshots may or may not expire on their own; this depends on your settings.
If the backup never expires old snapshots, you can force it to do so the next time you run an incremental replication: there is an option to destroy old snapshots on the remote side.
Snapshots do take some space, but they do not contain a full copy of your data, just references to the blocks that were added or removed since the previous snapshot.
Literally, you could very well have millions of snapshots on your system.

Last note: when you use Storage => Snapshots, make sure you list the proper pool or dataset using the filter.

Personally, I prefer using the CLI to run the query:

zfs list -t snapshot -r pool
and save the output into a file I can look at.
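If you ever need to prune old snapshots on the backup by hand, here is a rough sketch. The 30-day cutoff and pool name are assumptions, and the echo is left in as a dry-run guard:

```shell
#!/bin/sh
# Destroy snapshots under backup-volume older than ~30 days.
# Dry run: remove the "echo" once the printed list looks right.
cutoff=$(( $(date +%s) - 30*24*3600 ))
# -p makes the creation time a parseable epoch timestamp
zfs list -H -p -t snapshot -o name,creation -r backup-volume |
while read -r name creation; do
    if [ "$creation" -lt "$cutoff" ]; then
        echo zfs destroy "$name"
    fi
done
```

Be careful not to destroy the most recent snapshot shared with the source, or the next incremental replication will have no common base and will need to start over from a full send.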
 
Joined
Aug 10, 2018
Messages
46
@ezekiel.incorrigible, What do you call backup? Are you referring to the snapshot itself?

By backup I mean snapshot replication. The snapshots expire properly on the origin system but just accumulate forever on the replication/backup system, and I'd like to know how to have them expire after a certain period of time.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
by backup I mean snapshot replication - the snapshots expire properly on the origin system but are just accumulating forever on the replication/backup system and I'd like to know how to have them expire after a certain period of time.
I think this is intended behavior.
There is a way to have replication delete stale snapshots, but it is a double-edged sword, and the reason is as follows:
If you perform replication to a backup drive, you want to keep as much of the history as possible. If at some point you make a mistake, or as part of an upgrade you lose one or more datasets and their related snapshots, destroying stale snapshots will effectively hinder any possible recovery from the backup.
On the other hand, I fully understand the reasoning behind removing stale snapshots when dataset and file content is properly managed.
There is a scenario I was playing with a while back:
As you know, replication and snapshot lifespans are based on dates. If for some reason your clock is set far into the future, because you entered the wrong year or simply because of a bug or an attack, you could lose most of your snapshots in no time.
Manual replications can prevent that type of disaster, since they don't get destroyed automatically.
If you are only interested in making an exact copy of your server on the backup drive by removing stale snapshots, then you can run the zfs command on the receive side with the "-d" option, I think, or it could be "-F".

You should read up on those options and find out for yourself which one makes the most sense and is suitable for you.
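For reference, a sketch of what those flags do in an incremental pipeline (pool, host, and snapshot names are placeholders). It is `-F` on the receive side, combined with `-R` on the send side, that removes snapshots which no longer exist on the source:

```shell
# Incremental, recursive replication; with send -R plus recv -F,
# snapshots deleted on the source are also deleted on the backup.
zfs send -R -i tank@auto-20181001 tank@auto-20181101 | \
    ssh backup-box zfs receive -F -d backup-volume
```

`-d` on receive only controls how the destination dataset name is derived from the sent snapshot's path, so it addresses naming rather than stale-snapshot cleanup.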
 