Help with incremental snapshots


NASbox

I am attempting to create a removable backup system that will replicate my datasets to a removable pool in a hot-swap cartridge. After spending hours with the man pages and online research, I still need a bit of help. I've tried to answer my own question below to the best of my ability, and I'm hoping someone with ZFS experience can fill in a few blanks or spot obvious errors.

I've figured out that I can replicate incremental snapshots (on a pool with no datasets) with a command like this:

zfs send -vi TANK@SNAP TANK@NOW | zfs receive -v BACKUP/TANK

Where:
BACKUP: The removable pool
TANK: The working pool on FreeNAS
SNAP: The last snapshot that was backed up and is already on BACKUP
NOW: The current state to back up.
(@SNAP & @NOW are placeholders for unique timestamped names that can be maintained over time.)

Since I can't afford the luxury of keeping a ton of intermediate snapshots, I used '-i' (which sends only the delta between the two named snapshots) rather than '-I' (which would also send every intermediate snapshot).

I need to do the same thing with a pool structure like this (but split across multiple disks due to size):

TANK [HANDFUL OF FILES - I COULD EASILY CREATE ANOTHER DATASET]
TANK/DATASET1 [3TB] ->DISK1
TANK/DATASET2 [4TB] ->DISK1
TANK/DATASET3 [6TB] ->DISK2

I'm assuming that backing up TANK (the root of the pool) on both drives would facilitate the backup (or may even be necessary as a receive target).

What about restoration?

The backed-up snapshots of TANK will likely be at different times for BACKUP 1 & 2, i.e. @SNAP would actually be @SNAP1 and @SNAP2.
As long as I keep both @SNAP1 & @SNAP2 on TANK will that be a problem?
Will @SNAP1 & @SNAP2 on TANK (or the backups) grow significantly as a result of changes in the child datasets?
Can I restore DATASET1,2,3 without reference to a backup snapshot of the root TANK?


My best guess at how to proceed:

For ease of illustration I'm just calling the BACKUP pool BACKUP.
(I actually plan on having the name changed automatically to BACKUP01, BACKUP02, etc., but only one drive will be mounted at any time, so SNAP and NOW will likely translate into script variables.)

I had a lot of difficulty figuring out exactly what target would receive what snapshot, but I'm thinking the correct way to start the process is to create initial snapshots like this:

zfs snapshot TANK@SNAP
zfs snapshot TANK/DATASET1@SNAP
zfs snapshot TANK/DATASET2@SNAP
zfs snapshot TANK/DATASET3@SNAP

Create BACKUP 1
zfs send -v TANK@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET1@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET2@SNAP | zfs receive BACKUP

Create BACKUP 2
zfs send -v TANK@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET3@SNAP | zfs receive BACKUP

What do I do to send the incremental snapshots? Will this work:

Incremental BACKUP 1
zfs send -vi TANK@SNAP TANK@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET1@SNAP TANK/DATASET1@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET2@SNAP TANK/DATASET2@NOW | zfs receive -v BACKUP/TANK

Incremental BACKUP 2
zfs send -vi TANK@SNAP TANK@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET3@SNAP TANK/DATASET3@NOW | zfs receive -v BACKUP/TANK

Can I split a pool like this? Have I got the right targets?

AFAIK this type of thing isn't a common use case since ZFS was originally aimed at enterprise use. I've looked around and haven't been able to find anything readable on replication. If anyone has seen something that would be helpful I'd appreciate a reference.

Unfortunately I don't have spare hardware to test on, so I'm forced to work on a live system; anything that will reduce the amount of trial and error would be much appreciated. My workstation doesn't have enough RAM to virtualize a FreeNAS installation.

If I understood correctly, it's possible to create pools & vdevs in files. Is there an easy way to create JUNKPOOL with several datasets and JUNKBACKUP in a file or zvol, so that I could make a miniature mockup for testing? I've found there is often no substitute for practice, but I don't have a test setup I can afford to destroy.
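
Something like this file-backed mockup is what I have in mind (a sketch only; JUNKPOOL, JUNKBACKUP and the /tmp paths are placeholder names):

Code:
# Back two throwaway pools with sparse files:
truncate -s 1G /tmp/junkpool.img /tmp/junkback.img
zpool create JUNKPOOL /tmp/junkpool.img
zpool create JUNKBACKUP /tmp/junkback.img
# Mock up the dataset layout:
zfs create JUNKPOOL/DATASET1
zfs create JUNKPOOL/DATASET2
# ...practice snapshot/send/receive here...
# Tear everything down afterwards:
zpool destroy JUNKPOOL
zpool destroy JUNKBACKUP
rm /tmp/junkpool.img /tmp/junkback.img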

Any comments, suggestions or advice is much appreciated.

----END

P.S. If anybody besides me cares about this type of thing, I'm happy to share what I find (and maybe even some code - I don't promise to write pure POSIX; some of the bash extensions save a ton of work). I'll let replies be my guide, since it takes time to post stuff.

I don't know if I'm right or not, but there don't seem to be a lot of good backup alternatives for the home/small office, so I've got to cobble something together myself. Even rsync is starting to struggle a bit with the huge number of files encountered with today's giant disks.
 

NASbox

Did you decide upon a solution?

Haven't had time to figure it out... got as far as creating a couple of nested datasets and a single drive ZFS pool on a flash drive to play with. That's about it so far.
 

Magnetz

Will @SNAP1 & @SNAP2 on TANK (or the backups) grow significantly as a result of changes in the child datasets?
Can I restore DATASET1,2,3 without reference to a backup snapshot of the root TANK?

1. A small dataset with large child datasets will produce a small snapshot unless you use snapshot -r (which will also snapshot the children). I have a media dataset with photos/videos/music as child datasets that actually hold the data, and the snapshot of the root media dataset is small.
 

toadman

I need to do the same thing with a pool structure like this (but split across multiple disks due to size):

TANK [HANDFUL OF FILES - I COULD EASILY CREATE ANOTHER DATASET]
TANK/DATASET1 [3TB] ->DISK1
TANK/DATASET2 [4TB] ->DISK1
TANK/DATASET3 [6TB] ->DISK2

I'm assuming that backing up TANK (the root of the pool) on both drives would facilitate the backup (or may even be necessary as a receive target).

When you snapshot TANK, it's only going to snap the files/directories in TANK itself; it will not include anything in the child datasets. So it's going to grab those handful of files you mention. It just depends on how you want to organize it. It might be cleaner if you create DATASET4 for the handful of files.

If you use the -r (recursive) option on the snapshot of TANK, it will snap TANK and all child datasets - which I don't think you want to do if you are splitting the sends to different disks.
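
To illustrate the difference (a sketch using the names from the example above):

Code:
zfs snapshot TANK@SNAP      # snaps only TANK's own files/directories
zfs snapshot -r TANK@SNAP   # also snaps TANK/DATASET1, DATASET2, DATASET3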

What about restoration?

The backed-up snapshots of TANK will likely be at different times for BACKUP 1 & 2, i.e. @SNAP would actually be @SNAP1 and @SNAP2.
As long as I keep both @SNAP1 & @SNAP2 on TANK will that be a problem?
Will @SNAP1 & @SNAP2 on TANK (or the backups) grow significantly as a result of changes in the child datasets?
Can I restore DATASET1,2,3 without reference to a backup snapshot of the root TANK?

I'm not exactly sure what you meant, but as long as the snapshot you created (at whatever time) exists on TANK, you can send it. It should not be a problem if they are at different times.

Once a snapshot is created it is fixed and won't change. So no, it will not grow. The dataset itself will grow with changes/additions, and those changes will be captured in the next snapshot you make. So if you have a snapshot DATASET1@Tuesday and you then add files on Wednesday, those files will be captured in a snapshot DATASET1@Thursday.
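
In command form, that Tuesday/Thursday example might look like this (a sketch; it assumes BACKUP/DATASET1 already holds @Tuesday):

Code:
zfs snapshot TANK/DATASET1@Tuesday
# ...files added on Wednesday...
zfs snapshot TANK/DATASET1@Thursday
# Send only the changes between the two snapshots:
zfs send -v -i TANK/DATASET1@Tuesday TANK/DATASET1@Thursday | zfs receive -v BACKUP/DATASET1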


My best guess at how to proceed:

For ease of illustration I'm just calling the BACKUP pool BACKUP.
(I actually plan on having the name changed automatically to BACKUP01, BACKUP02, etc., but only one drive will be mounted at any time, so SNAP and NOW will likely translate into script variables.)

I had a lot of difficulty figuring out exactly what target would receive what snapshot, but I'm thinking the correct way to start the process is to create initial snapshots like this:

zfs snapshot TANK@SNAP
zfs snapshot TANK/DATASET1@SNAP
zfs snapshot TANK/DATASET2@SNAP
zfs snapshot TANK/DATASET3@SNAP

Create BACKUP 1
zfs send -v TANK@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET1@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET2@SNAP | zfs receive BACKUP

Create BACKUP 2
zfs send -v TANK@SNAP | zfs receive BACKUP
zfs send -v TANK/DATASET3@SNAP | zfs receive BACKUP

I think it should work. But I guess this is why I was a bit confused: is "BACKUP 2" going to a separate disk? If so, it's confusing because the receiving pool is also called BACKUP (which is OK, I think).

What do I do to send the incremental snapshots? Will this work:
Incremental BACKUP 1
zfs send -vi TANK@SNAP TANK@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET1@SNAP TANK/DATASET1@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET2@SNAP TANK/DATASET2@NOW | zfs receive -v BACKUP/TANK

Incremental BACKUP 2
zfs send -vi TANK@SNAP TANK@NOW | zfs receive -v BACKUP/TANK
zfs send -vi TANK/DATASET3@SNAP TANK/DATASET3@NOW | zfs receive -v BACKUP/TANK

It should work, but you will have two copies of the files in TANK that are not in child datasets. Unless you want two copies, you could eliminate the send of TANK@NOW for BACKUP 2.



Can I split a pool like this? Have I got the right targets?
AFAIK this type of thing isn't a common use case since ZFS was originally aimed at enterprise use. I've looked around and haven't been able to find anything readable on replication. If anyone has seen something that would be helpful I'd appreciate a reference.

Unfortunately I don't have spare hardware to test on, so I'm forced to work on a live system; anything that will reduce the amount of trial and error would be much appreciated. My workstation doesn't have enough RAM to virtualize a FreeNAS installation.

The short answer is "yes". I think it would help if you think of datasets independently. That was why I made the suggestion above about creating DATASET4 for your "other" files that exist in TANK.

And since a snapshot is a point-in-time capture of a dataset that is itself independent, one can see how easy it is to move data around: you are just transferring datasets captured in time, so you have a known quantity. Given that, yes, you can send those independent datasets to whatever pool or file you want. (Yes, you can send the snapshot to a file on a disk; it doesn't have to be another pool, if that works out easier for you. But you would still have to restore by receiving the file into a pool, of course.)
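
For example, something along these lines should work (a sketch; the stream file name is made up):

Code:
# Write the replication stream to an ordinary file:
zfs send -v TANK/DATASET1@SNAP > /mnt/BACKUP/DATASET1_SNAP.zfsstream
# Restoring means receiving that file back into a pool:
zfs receive -v TANK/DATASET1_RESTORED < /mnt/BACKUP/DATASET1_SNAP.zfsstream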



If I understood correctly, it's possible to create pools & vdevs in files. Is there an easy way to create JUNKPOOL with several datasets and JUNKBACKUP in a file or zvol, so that I could make a miniature mockup for testing? I've found there is often no substitute for practice, but I don't have a test setup I can afford to destroy.

You don't need to create a test POOL. Given that the datasets are all independent anyway, you can just create test DATASETS, then send them to backup and practice a restore. You can put dummy files in each DATASET if you want. I see no reason testing like this would take up much space, and you won't ruin your existing datasets as long as you are operating only on the JUNKDATASETx ones.
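
A possible dry run along those lines (a sketch; JUNKDATASET1, JUNKRESTORE and the dummy file are placeholder names):

Code:
zfs create TANK/JUNKDATASET1
touch /mnt/TANK/JUNKDATASET1/dummy.txt
zfs snapshot TANK/JUNKDATASET1@test1
zfs send -v TANK/JUNKDATASET1@test1 | zfs receive -v BACKUP/JUNKDATASET1
# Practice a restore under a different name, then clean up:
zfs send -v BACKUP/JUNKDATASET1@test1 | zfs receive -v TANK/JUNKRESTORE
zfs destroy -r TANK/JUNKDATASET1
zfs destroy -r TANK/JUNKRESTORE
zfs destroy -r BACKUP/JUNKDATASET1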
 

NASbox

Thanks @toadman for taking the time to reply... it's really tough communicating exactly what is wanted/what the problem is in a forum post without it getting really long, while keeping it clear. While responding to your reply, I've put a revised, simplified TL;DR question at the bottom.

When you snapshot TANK, it's only going to snap the files/directories in TANK itself; it will not include anything in the child datasets. So it's going to grab those handful of files you mention. It just depends on how you want to organize it. It might be cleaner if you create DATASET4 for the handful of files.

If you use the -r (recursive) option on the snapshot of TANK, it will snap TANK and all child datasets - which I don't think you want to do if you are splitting the sends to different disks.
Thanks... I created my initial backups this way because everything fit on one disk. I no longer have that as an option, so I actually have to learn how this stuff works.

I'm not exactly sure what you meant, but as long as the snapshot you created (at whatever time) exists on TANK, you can send it. It should not be a problem if they are at different times.

Once a snapshot is created it is fixed and won't change. So no, it will not grow. The dataset itself will grow with changes/additions, and those changes will be captured in the next snapshot you make. So if you have a snapshot DATASET1@Tuesday and you then add files on Wednesday, those files will be captured in a snapshot DATASET1@Thursday.
Maybe I wasn't clear about what I meant by growing.

DATASET1@MONDAY
(Add 1G of Files)
DATASET1@TUESDAY
(Delete Monday's files and add another 1G of files)
DATASET1@WEDNESDAY

With the snapshots, DATASET1 is 2G bigger than it was on Monday: +1G for Monday's files, which are protected by @TUESDAY, plus the 1G added for Tuesday, which is protected by @WEDNESDAY. However, the live data in DATASET1 has only grown by 1G (+1-1+1). The backup with all the snapshots will still retain the deleted data, as will DATASET1 itself, until the snapshot is deleted. I've got to deal with managing the size of the backup pool; over time I will have to remove snapshots from both the original and the backup pools, and I need to understand exactly what I need to keep in order to be able to restore.
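
As far as I can tell, inspecting and pruning works along these lines (a sketch; the key constraint is that the newest snapshot common to both sides must survive, since it is the base for the next incremental send):

Code:
# Show how much space snapshots are holding (the usedsnap column):
zfs list -o space -r TANK
# Prune an old snapshot from both the source and the backup:
zfs destroy TANK/DATASET1@MONDAY
zfs destroy BACKUP/DATASET1@MONDAY
# Never destroy the newest common snapshot - it anchors the next 'zfs send -i'.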

I think it should work. But I guess this is why I was a bit confused: is "BACKUP 2" going to a separate disk? If so, it's confusing because the receiving pool is also called BACKUP (which is OK, I think).
Sorry for the confusion... pool BACKUP is a single-disk pool in a hot-swap bay, so even if the name is the same, each cartridge is a totally separate pool.

It should work, but you will have two copies of the files in TANK that are not in child datasets. Unless you want two copies, you could eliminate the send of TANK@NOW for BACKUP 2.

The short answer is "yes". I think it would help if you think of datasets independently. That was why I made the suggestion above about creating DATASET4 for your "other" files that exist in TANK.

And since a snapshot is a point-in-time capture of a dataset that is itself independent, one can see how easy it is to move data around: you are just transferring datasets captured in time, so you have a known quantity. Given that, yes, you can send those independent datasets to whatever pool or file you want. (Yes, you can send the snapshot to a file on a disk; it doesn't have to be another pool, if that works out easier for you. But you would still have to restore by receiving the file into a pool, of course.)
I agree with you about keeping TANK empty and putting the files somewhere else - either in an existing or a new dataset.

What I don't understand is how to send a dataset independently of its parent. If I were copying a file, 'cp /mnt/TANK/DATASET1 /mnt/BACKUP/' would copy DATASET1, and what path it came from wouldn't matter. I couldn't figure out how to do that - the dataset receiving the snapshot had to have exactly the same name as the source - or maybe I was doing something wrong.

You don't need to create a test POOL. Given that the datasets are all independent anyway, you can just create test DATASETS, then send them to backup and practice a restore. You can put dummy files in each DATASET if you want. I see no reason testing like this would take up much space, and you won't ruin your existing datasets as long as you are operating only on the JUNKDATASETx ones.
The datasets are way too big for "casual testing", and I don't want to take any risks with real data. I can't afford to wipe out TANK - it's way too much work to restore - and the backups take way too long to run.

REVISED TL;DR QUESTION
Assuming I have (tinyjunk contains no files):
TANK/tinyjunk
TANK/tinyjunk/TJDS1
TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS3

I'd like to back up TJDS1, TJDS2 and TJDS3 on 3 separate removable pools (all called BACKUP - i.e. only one is mounted at any one time).

TANK/tinyjunk@BK01
TANK/tinyjunk/TJDS1@BK01
TANK/tinyjunk/TJDS2@BK01
TANK/tinyjunk/TJDS3@BK01

What command(s) would I use to make those backups?

Then if I create TANK/tinyjunk/TJDS2@BK02, how would I "add" it to that backup?

Do I even need to back up TANK/tinyjunk?
If I lost it, would just doing a zfs create TANK/tinyjunk allow me to restore, or would it fail because it has a different guid than the original dataset?

Then how would I restore TANK/tinyjunk/TJDS2@BK01 or TANK/tinyjunk/TJDS2@BK02? (Would the command be the same, with just the 01 changed to 02?)

EDIT:
I created the datasets as above, then created a single-disk pool on a USB drive called USBACK and tried this:

Code:
#>zfs send -R -i  TANK/tinyjunk/TJDS2@BK01 | zfs recv USBACK
missing snapshot argument
usage:
		send [-DnPpRvLec] [-[iI] snapshot] <snapshot>
		send [-Le] [-i snapshot|bookmark] <filesystem|volume|snapshot>
		send [-nvPe] -t <receive_resume_token>

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
cannot receive: failed to read from stream
I have no clue what to do. I read the man page, but it doesn't have enough context to be meaningful.

Any assistance with this is much appreciated.
 

devnullius

I think I have had that very last problem too? Take a look here please? https://forums.freenas.org/index.ph...a-10tb-transfer-disk.59852/page-2#post-435219

NASbox

I think I have had that very last problem too? Take a look here please? https://forums.freenas.org/index.ph...a-10tb-transfer-disk.59852/page-2#post-435219
Your post gave me a hint, so I did a bit of experimentation and managed to work out a proof of concept for the above case:
  1. Back up a bunch of datasets to a removable pool (USBACK, a USB thumb drive)
  2. Back up another unrelated dataset
  3. Prune some of the backups off the USB backup pool (USBACK)
  4. Destroy the root dataset tinyjunk
  5. Manually recreate the root dataset tinyjunk (assuming it wasn't backed up)
  6. Restore the backups of some of the datasets onto the recreated dataset tinyjunk.
For the benefit of anyone else who is trying to understand snapshot replication, I am going to post an edited transcript of my shell session showing how the backup/restore operations were performed.

The TL;DR on the solution is that using -e on zfs receive allows datasets to be replicated independently of their parent.
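
Distilled from the transcript below, the working pattern is:

Code:
# Initial full send; -e makes the receive graft the dataset under
# USBACK using only the last path component (TJDS2) as its name:
zfs send -v TANK/tinyjunk/TJDS2@BK01 | zfs receive -ve USBACK
# Later incrementals use the previous snapshot as the -i base:
zfs send -v -i TANK/tinyjunk/TJDS2@BK01 TANK/tinyjunk/TJDS2@BK02 | zfs receive -ve USBACK
# Restoring grafts the backup copy back under the (recreated) parent:
zfs send -v USBACK/TJDS2@BK05 | zfs receive -ve TANK/tinyjunk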

Here's the code:
Code:
#>zfs send  -v   TANK/tinyjunk/TJDS2@BK01 | zfs receive -ve USBACK
full send of TANK/tinyjunk/TJDS2@BK01 estimated size is 40.1K
total estimated size is 40.1K
TIME		SENT   SNAPSHOT
receiving full stream of TANK/tinyjunk/TJDS2@BK01 into USBACK/TJDS2@BK01
received 50.7KB stream in 1 seconds (50.7KB/sec)
#>
#>zfs send  -v -i TANK/tinyjunk/TJDS2@BK01  TANK/tinyjunk/TJDS2@BK02 | zfs receive -ve USBACK
send from @BK01 to TANK/tinyjunk/TJDS2@BK02 estimated size is 37.6K
total estimated size is 37.6K
TIME		SENT   SNAPSHOT
receiving incremental stream of TANK/tinyjunk/TJDS2@BK02 into USBACK/TJDS2@BK02
received 21.3KB stream in 1 seconds (21.3KB/sec)
#>
USBACK														   528K  25.7G	88K  /mnt/USBACK
USBACK/TJDS2													 152K  25.7G	96K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK01												 56K	  -	92K  -
USBACK/TJDS2@BK02												   0	  -	96K  -
---
#>zfs send  -v -R  TANK/tinyjunk/TJDS1@BK01 | zfs receive -ve USBACK
full send of TANK/tinyjunk/TJDS1@BK01 estimated size is 37.1K
total estimated size is 37.1K
TIME		SENT   SNAPSHOT
receiving full stream of TANK/tinyjunk/TJDS1@BK01 into USBACK/TJDS1@BK01
received 46.6KB stream in 1 seconds (46.6KB/sec)
---
USBACK														   616K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1													  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01												   0	  -	88K  -
USBACK/TJDS2													 152K  25.7G	96K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK01												 56K	  -	92K  -
USBACK/TJDS2@BK02												   0	  -	96K  -
---
#>zfs send  -v -R  TANK/jails@auto-20171229.1433-2w  | zfs receive -ve USBACK
skipping dataset TANK/jails/.warden-template-pluginjail-11.0-x64: snapshot auto-20171229.1433-2w does not exist
full send of TANK/jails@auto-20171229.1433-2w estimated size is 12.1K
total estimated size is 12.1K
TIME		SENT   SNAPSHOT
receiving full stream of TANK/jails@auto-20171229.1433-2w into USBACK/jails@auto-20171229.1433-2w
received 46.6KB stream in 1 seconds (46.6KB/sec)
---
USBACK														   716K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1													  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01												   0	  -	88K  -
USBACK/TJDS2													 152K  25.7G	96K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK01												 56K	  -	92K  -
USBACK/TJDS2@BK02												   0	  -	96K  -
USBACK/jails													  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w								  0	  -	88K  -
---
#>zfs list -t all -r TANK/tinyjunk/TJDS2
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk/TJDS2		324K  18.4T   205K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK01   119K	  -   196K  -
TANK/tinyjunk/TJDS2@BK02	  0	  -   205K  -
#>zfs list -t all -r TANK/tinyjunk/TJDS2 >tjunkds3.txt
#>zfs snap TANK/tinyjunk/TJDS2@BK03
#>zfs list -t all -r TANK/tinyjunk/TJDS2
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk/TJDS2		452K  18.4T   213K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK01   119K	  -   196K  -
TANK/tinyjunk/TJDS2@BK02   119K	  -   205K  -
TANK/tinyjunk/TJDS2@BK03	  0	  -   213K  -
---
#>zfs send  -v -i TANK/tinyjunk/TJDS2@BK02  TANK/tinyjunk/TJDS2@BK03 | zfs receive -ve USBACK
send from @BK02 to TANK/tinyjunk/TJDS2@BK03 estimated size is 27.1K
total estimated size is 27.1K
TIME		SENT   SNAPSHOT
receiving incremental stream of TANK/tinyjunk/TJDS2@BK03 into USBACK/TJDS2@BK03
received 11.6KB stream in 1 seconds (11.6KB/sec)
---
#>zfs list -t all -r TANK/tinyjunk/TJDS2 >tjunkds4.txt
#>cat tjunkds4.txt
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk/TJDS2		452K  18.4T   213K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK01   119K	  -   196K  -
TANK/tinyjunk/TJDS2@BK02   119K	  -   205K  -
TANK/tinyjunk/TJDS2@BK03	  0	  -   213K  -
#>zfs destroy USBACK/TJDS2@BK01
---
#>zfs list -t all -r TANK/tinyjunk/TJDS2
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk/TJDS2		580K  18.4T   222K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK01   119K	  -   196K  -
TANK/tinyjunk/TJDS2@BK02   119K	  -   205K  -
TANK/tinyjunk/TJDS2@BK03   119K	  -   213K  -
---
#>zfs snap TANK/tinyjunk/TJDS2@BK04
#>zfs list -t all -r TANK/tinyjunk/TJDS2
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk/TJDS2		580K  18.4T   222K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK01   119K	  -   196K  -
TANK/tinyjunk/TJDS2@BK02   119K	  -   205K  -
TANK/tinyjunk/TJDS2@BK03   119K	  -   213K  -
TANK/tinyjunk/TJDS2@BK04	  0	  -   222K  -
---
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   720K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 156K  25.7G   100K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK02					 56K	  -	96K  -
USBACK/TJDS2@BK03					   0	  -   100K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>zfs send  -v -i TANK/tinyjunk/TJDS2@BK03  TANK/tinyjunk/TJDS2@BK04 | zfs receive -ve USBACK
send from @BK03 to TANK/tinyjunk/TJDS2@BK04 estimated size is 27.1K
total estimated size is 27.1K
TIME		SENT   SNAPSHOT
receiving incremental stream of TANK/tinyjunk/TJDS2@BK04 into USBACK/TJDS2@BK04
received 12.3KB stream in 1 seconds (12.3KB/sec)
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   816K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 216K  25.7G   104K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK02					 56K	  -	96K  -
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					   0	  -   104K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>zfs destroy USBACK/TJDS2@BK02
#>zfs list -t all -r TANK/tinyjunk/TJDS2 >tjunkds5.txt
#>zfs snap TANK/tinyjunk/TJDS2@BK05
#>zfs send  -v -i TANK/tinyjunk/TJDS2@BK04  TANK/tinyjunk/TJDS2@BK05 | zfs receive -ve USBACK
send from @BK04 to TANK/tinyjunk/TJDS2@BK05 estimated size is 27.1K
total estimated size is 27.1K
TIME		SENT   SNAPSHOT
receiving incremental stream of TANK/tinyjunk/TJDS2@BK05 into USBACK/TJDS2@BK05
received 13.1KB stream in 1 seconds (13.1KB/sec)
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   820K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 220K  25.7G   108K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					 56K	  -   104K  -
USBACK/TJDS2@BK05					   0	  -   108K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>ls -la
total 46
drwxr-xr-x  2 root  wheel	  7 Feb  3 02:11 .
drwxr-xr-x  5 root  wheel	  6 Jan 17 02:00 ..
-rw-r--r--  1 root  wheel  10860 Jan 17 02:08 tjunkds2-f2.txt
-rw-r--r--  1 root  wheel   2715 Jan 17 02:02 tjunkds2.txt
-rw-r--r--  1 root  wheel	228 Feb  3 01:57 tjunkds3.txt
-rw-r--r--  1 root  wheel	277 Feb  3 02:01 tjunkds4.txt
-rw-r--r--  1 root  wheel	326 Feb  3 02:11 tjunkds5.txt
---
#>zfs destroy TANK/tinyjunk/TJDS2
cannot destroy 'TANK/tinyjunk/TJDS2': filesystem has children
use '-r' to destroy the following datasets:
TANK/tinyjunk/TJDS2@BK03
TANK/tinyjunk/TJDS2@BK04
TANK/tinyjunk/TJDS2@BK05
TANK/tinyjunk/TJDS2@BK02
TANK/tinyjunk/TJDS2@BK01
#>zfs destroy -r TANK/tinyjunk/TJDS2
cannot unmount '/mnt/TANK/tinyjunk/TJDS2': Device busy
#>cd ..
#>pwd
/mnt/TANK/tinyjunk
#>zfs destroy -rv TANK/tinyjunk/TJDS2
will destroy TANK/tinyjunk/TJDS2
#>zfs list -t all -r TANK/tinyjunk
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk			  682K  18.4T   188K  /mnt/TANK/tinyjunk
TANK/tinyjunk@back01	   119K	  -   188K  -
TANK/tinyjunk/TJDS1		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS1
TANK/tinyjunk/TJDS1@BK01	  0	  -   188K  -
TANK/tinyjunk/TJDS3		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS3
---
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   820K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 220K  25.7G   108K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					 56K	  -   104K  -
USBACK/TJDS2@BK05					   0	  -   108K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>zfs send  -v -R USBACK/TJDS2@BK05  | zfs receive -ve TANK/tinyjunk
full send of USBACK/TJDS2@BK03 estimated size is 51.6K
send from @BK03 to USBACK/TJDS2@BK04 estimated size is 27.1K
send from @BK04 to USBACK/TJDS2@BK05 estimated size is 27.1K
total estimated size is 106K
TIME		SENT   SNAPSHOT
receiving full stream of USBACK/TJDS2@BK03 into TANK/tinyjunk/TJDS2@BK03
02:23:32   64.0K   USBACK/TJDS2@BK03
TIME		SENT   SNAPSHOT
TIME		SENT   SNAPSHOT
received 64.3KB stream in 1 seconds (64.3KB/sec)
receiving incremental stream of USBACK/TJDS2@BK04 into TANK/tinyjunk/TJDS2@BK04
received 12.3KB stream in 1 seconds (12.3KB/sec)
receiving incremental stream of USBACK/TJDS2@BK05 into TANK/tinyjunk/TJDS2@BK05
received 13.1KB stream in 1 seconds (13.1KB/sec)
#>zfs list -t all -r TANK/tinyjunk
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk			 1.12M  18.4T   188K  /mnt/TANK/tinyjunk
TANK/tinyjunk@back01	   119K	  -   188K  -
TANK/tinyjunk/TJDS1		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS1
TANK/tinyjunk/TJDS1@BK01	  0	  -   188K  -
TANK/tinyjunk/TJDS2		469K  18.4T   230K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK03   119K	  -   213K  -
TANK/tinyjunk/TJDS2@BK04   119K	  -   222K  -
TANK/tinyjunk/TJDS2@BK05	  0	  -   230K  -
TANK/tinyjunk/TJDS3		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS3
---
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   820K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 220K  25.7G   108K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					 56K	  -   104K  -
USBACK/TJDS2@BK05					   0	  -   108K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
#>zfs snap TANK/tinyjunk/TJDS3@BK01
#>zfs send  -v -R  TANK/tinyjunk/TJDS3@BK01  | zfs receive -ve USBACK
full send of TANK/tinyjunk/TJDS3@BK01 estimated size is 37.1K
total estimated size is 37.1K
TIME		SENT   SNAPSHOT
receiving full stream of TANK/tinyjunk/TJDS3@BK01 into USBACK/TJDS3@BK01
received 46.6KB stream in 1 seconds (46.6KB/sec)
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   908K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 220K  25.7G   108K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					 56K	  -   104K  -
USBACK/TJDS2@BK05					   0	  -   108K  -
USBACK/TJDS3						  88K  25.7G	88K  /mnt/USBACK/TJDS3
USBACK/TJDS3@BK01					   0	  -	88K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>zfs destroy -v -r TANK/tinyjunk
will destroy TANK/tinyjunk/TJDS1@BK01
will destroy TANK/tinyjunk/TJDS1
will destroy TANK/tinyjunk/TJDS2@BK05
will destroy TANK/tinyjunk/TJDS2@BK04
will destroy TANK/tinyjunk/TJDS2@BK03
will destroy TANK/tinyjunk/TJDS2
will destroy TANK/tinyjunk/TJDS3@BK01
will destroy TANK/tinyjunk/TJDS3
will destroy TANK/tinyjunk@back01
will destroy TANK/tinyjunk
cannot unmount '/mnt/TANK/tinyjunk': Device busy
#>pwd
/mnt/TANK/tinyjunk
#>cd ..
#>zfs destroy -v -r TANK/tinyjunk
will destroy TANK/tinyjunk
---
#>zfs list -t all -r TANK/tinyjunk
cannot open 'TANK/tinyjunk': dataset does not exist
#>zfs create TANK/tinyjunk
#>zfs list -t all -r TANK/tinyjunk
NAME			USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk   188K  18.4T   188K  /mnt/TANK/tinyjunk
---
#>zfs list -t all -r USBACK
NAME								 USED  AVAIL  REFER  MOUNTPOINT
USBACK							   908K  25.7G	88K  /mnt/USBACK
USBACK/TJDS1						  88K  25.7G	88K  /mnt/USBACK/TJDS1
USBACK/TJDS1@BK01					   0	  -	88K  -
USBACK/TJDS2						 220K  25.7G   108K  /mnt/USBACK/TJDS2
USBACK/TJDS2@BK03					 56K	  -   100K  -
USBACK/TJDS2@BK04					 56K	  -   104K  -
USBACK/TJDS2@BK05					   0	  -   108K  -
USBACK/TJDS3						  88K  25.7G	88K  /mnt/USBACK/TJDS3
USBACK/TJDS3@BK01					   0	  -	88K  -
USBACK/jails						  88K  25.7G	88K  /mnt/USBACK/jails
USBACK/jails@auto-20171229.1433-2w	  0	  -	88K  -
---
#>zfs send -v USBACK/TJDS1@BK01  | zfs receive -ve TANK/tinyjunk
full send of USBACK/TJDS1@BK01 estimated size is 37.1K
total estimated size is 37.1K
TIME		SENT   SNAPSHOT
receiving full stream of USBACK/TJDS1@BK01 into TANK/tinyjunk/TJDS1@BK01
received 46.6KB stream in 2 seconds (23.3KB/sec)
#>zfs send -v USBACK/TJDS2@BK05  | zfs receive -ve TANK/tinyjunk
full send of USBACK/TJDS2@BK05 estimated size is 52.6K
total estimated size is 52.6K
TIME		SENT   SNAPSHOT
receiving full stream of USBACK/TJDS2@BK05 into TANK/tinyjunk/TJDS2@BK05
received 67.5KB stream in 1 seconds (67.5KB/sec)
#>zfs send -v USBACK/TJDS3@BK01  | zfs receive -ve TANK/tinyjunk
full send of USBACK/TJDS3@BK01 estimated size is 37.1K
total estimated size is 37.1K
TIME		SENT   SNAPSHOT
receiving full stream of USBACK/TJDS3@BK01 into TANK/tinyjunk/TJDS3@BK01
received 46.6KB stream in 1 seconds (46.6KB/sec)
---
#>zfs list -t all -r TANK/tinyjunk
NAME					   USED  AVAIL  REFER  MOUNTPOINT
TANK/tinyjunk			  793K  18.4T   188K  /mnt/TANK/tinyjunk
TANK/tinyjunk/TJDS1		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS1
TANK/tinyjunk/TJDS1@BK01	  0	  -   188K  -
TANK/tinyjunk/TJDS2		230K  18.4T   230K  /mnt/TANK/tinyjunk/TJDS2
TANK/tinyjunk/TJDS2@BK05	  0	  -   230K  -
TANK/tinyjunk/TJDS3		188K  18.4T   188K  /mnt/TANK/tinyjunk/TJDS3
TANK/tinyjunk/TJDS3@BK01	  0	  -   188K  -
Hope this helps, and I'm sure others may take this and expand on it.
 

devnullius

Awesome! :)
 