Sending/receiving a snapshot, still empty dataset

Status
Not open for further replies.

Patrol02

Dabbler
Joined
Sep 11, 2016
Messages
15
Hello,

I am implementing a "backup" solution which periodically sends snapshots from one pool to another.

It boils down to these commands:

Code:
zfs send -Ri $dst_snapshot_name $latest_src_snapshot | zfs receive -Fduv $dst_pool
zfs rollback $new_dst_snapshot


but you can see the full script here if you want:
https://github.com/AlexeyRaga/freenas-scripts/blob/master/backup_dataset.sh
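For reference, the `-u` flag in `zfs receive -Fduv` tells ZFS to receive the stream *without mounting* the resulting file systems, so a dataset can exist and hold all its data while nothing is mounted at its mountpoint. A quick way to check (a sketch, using the dataset name from this thread):

```
# Check whether the received dataset is actually mounted
# (dataset name taken from the commands above).
zfs get mounted,mountpoint backup_pool/client_photos

# If "mounted" reports "no", mount it explicitly:
zfs mount backup_pool/client_photos
# ...or mount everything in the pool that isn't mounted yet:
zfs mount -a
```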

From what I can see, sending/receiving a snapshot works: I see the right amount of space is being taken on the target pool, and zfs list -t snapshot also displays the new snapshot with the correct size:

Code:
freenas% zfs list -t snapshot | grep backup
backup_pool/client_photos@auto-20170305.1808-3d					   8K	  -  1.84T  -


However, when I sudo ls /mnt/backup_pool/client_photos/, the dataset appears to be empty.

I tried doing `sudo zfs rollback backup_pool/client_photos@auto-20170305.1808-3d` a couple of times, but it doesn't help: the dataset still appears to be empty.
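As a side note, snapshot contents can also be inspected directly, without rolling back, through the hidden `.zfs` control directory (a sketch; the `snapdir` property controls whether the directory is visible to `ls`):

```
# Make the .zfs control directory visible (it is "hidden" by default)
zfs set snapdir=visible backup_pool/client_photos

# Browse the snapshot's contents read-only
ls /mnt/backup_pool/client_photos/.zfs/snapshot/auto-20170305.1808-3d/
```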

Can you explain why, and what I need to do to see the content?
 

remonv76

Dabbler
Joined
Dec 27, 2014
Messages
49
Why aren't you using the replication and snapshot function in the WebGUI? It does exactly the same thing and works perfectly. I also use it on my system to replicate snapshots to another zpool consisting of a 6TB mirror.
All you have to do is copy the pub.key into the root user's account and set up snapshots and replication to localhost.
 

Patrol02

Dabbler
Joined
Sep 11, 2016
Messages
15
I guess I could, but I didn't see why I need to go through SSH while I am on the same machine... Also, this helps my understanding of how things work :)
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
I wonder the same - I have some replicated snapshots where the files are just there after sending them, while others need a clone of the snapshot because they don't show up in the target directory...
Of course, all snapshots were created & sent identically.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Very good idea, but unfortunately not the solution to my issue:
Code:
zfs get readonly tank/main/unix
NAME  PROPERTY  VALUE  SOURCE
tank/main/unix  readonly  off  default

/mnt/tank/main# zfs get readonly tank/main/unix/nakivo
NAME  PROPERTY  VALUE  SOURCE
tank/main/unix/nakivo  readonly  off  default

cat mount.today |grep nakivo
tank/main/unix/nakivo  /mnt/tank/main/unix/nakivo zfs  rw,nfsv4acls  0 0

ls -ltr |grep nakivo
drwxr-xr-x  2 root  wheel  2 Jan 17 10:56 nakivo/
drwxr-xr-x  3 1003  1003  3 Jan 17 11:16 nakivo-migrate-clone/

ls nakivo
./  ../
/mnt/tank/main/unix# ls nakivo-migrate-clone/
./  ../  NakivoBackup/

nakivo-migrate-clone is, of course, the clone of the nakivo snapshot


Edit:
I destroyed the nakivo snapshot and retransmitted it, and this time it worked as expected.
It might be that the parent snapshot (unix) was not present when the nakivo snapshot transfer started or finished, and that might have caused the issue...
Will need to monitor snapshots...
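One way to monitor this is to compare the guid property of the corresponding snapshots on both sides: a snapshot's guid survives send/receive unchanged, so differing guids mean the target snapshot does not hold the same data. A sketch (the dataset names below are hypothetical placeholders; substitute your own source and target):

```
# Hypothetical source/target snapshot names - replace with your own.
SRC=tank/main/unix/nakivo@auto-snap
DST=backup/unix/nakivo@auto-snap

# Print just the guid value for each side; they should match
# if the snapshot was transferred intact.
zfs get -H -o value guid "$SRC"
zfs get -H -o value guid "$DST"
```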
 