Restoring a Pool from a Future Manual Backup on a Different Medium

Jannisberry

Cadet
Joined
Nov 8, 2023
Messages
4
Hi,
I have a pretty specific question about expanding my home server. I currently have three 12 TB Seagate drives in a pool and two unused 12 TB ones that I want to incorporate, so I end up with a pool of five 12 TB drives in Z1. To get there I will transfer all files to tape storage, because I have two tape drives and a ton of cartridges. I know iXsystems doesn't want you to do this, but the hardware is here and I have no more space in this case for any drives after this.

My plan is now to copy everything onto tapes, and after destroying and recreating the pool, to copy everything back in the same structure, just with double the capacity. Now my question is: how can I get back the config for the pool, like datasets, shares and so on? Or do I have to recreate all of this afterwards?

How I save the data onto the tapes really matters here. I can't really export the pool in one piece because there will be multiple tapes, and the backup solution I am using is the tar command. What I could do is create a single archive and split it onto multiple tapes with tar, but will this keep the configuration?

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If your tape setup is capable of handling a single file, you can use ZFS Send to a file to make an entire backup, including all datasets, snapshots, zVols and any non-default attributes.

Normally ZFS Send is piped to ZFS Receive, but there is nothing stopping you from creating one huge file;
zfs snapshot -r POOL@SNAPSHOT
zfs send -Rpv POOL@SNAPSHOT >huge_backup.zfs

Then on restore;
zfs receive -dFuv POOL <huge_backup.zfs
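
If you go the single-file route, it is also worth checksumming the stream as it is written, so the backup can be verified before the pool is destroyed, (just a sketch, the file names are examples);
zfs send -Rpv POOL@SNAPSHOT | tee huge_backup.zfs | sha256sum >huge_backup.zfs.sha256
# later, re-hash the file and compare against the stored value
sha256sum huge_backup.zfs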

The exact syntax to write to your tape is up to you. It can be via SSH if the tape drive is not local. For example;
zfs send -Rpv POOL@SNAPSHOT | ssh TAPE_SERVER dd bs=64k of=/dev/MY_TAPE_DRIVE
ssh TAPE_SERVER dd bs=64k if=/dev/MY_TAPE_DRIVE | zfs receive -dFuv POOL
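
If the tape drive is local instead, the same idea works without SSH, (a sketch, /dev/nst0 is just an example of a Linux non-rewinding tape device);
zfs send -Rpv POOL@SNAPSHOT | dd bs=64k of=/dev/nst0
dd bs=64k if=/dev/nst0 | zfs receive -dFuv POOL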

There are also commands to split a file up, like lxsplit. Plain coreutils split can even read from standard input when given "-" as the input file.
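
That would let you stage the stream as tape-sized pieces, (a sketch, the chunk size and the /mnt/scratch path are just examples, and you need enough scratch disk to hold the pieces);
zfs send -Rpv POOL@SNAPSHOT | split -b 750G - /mnt/scratch/huge_backup.zfs.
cat /mnt/scratch/huge_backup.zfs.* | zfs receive -dFuv POOL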


All that said, to put it bluntly, unless you have a reasonable amount of skill with BOTH ZFS & Unix, you are likely not going to get this implemented without data loss.


Last, using RAID-Z1, (single parity), is not recommended for drives over 1 TB to 2 TB. You can do so if you have good backups.

Jannisberry

Cadet
Joined
Nov 8, 2023
Messages
4
So my tape drive is local to the machine. What I understood is that I can make a snapshot like you mentioned and then send it to the tape drive with a tar command like this:
zfs send -Rpv POOL@SNAPSHOT | tar -cvf /dev/st0 --multi-volume --tape-length=780G

But I am unsure about the tar command: it usually needs to be told at the end what it should write into the archive. Hopefully it gets that input from the send command here.

When reading and writing the volumes, the tape takes a significant amount of time to rewind after each one finishes, so I hope zfs send/receive won't just time out.

I do know that Z1 isn't recommended, but I think for my application it is OK.

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
So my tape drive is local to the machine. What I understood is that I can make a snapshot like you mentioned and then send it to the tape drive with a tar command like this:
zfs send -Rpv POOL@SNAPSHOT | tar -cvf /dev/st0 --multi-volume --tape-length=780G

But I am unsure about the tar command: it usually needs to be told at the end what it should write into the archive. Hopefully it gets that input from the send command here.

When reading and writing the volumes, the tape takes a significant amount of time to rewind after each one finishes, so I hope zfs send/receive won't just time out.
...
A ZFS Snapshot is immutable, (unchanging), for its entire life, unless there is a hardware failure, like a bad disk block or a whole disk. And a ZFS Snapshot has the same redundancy level as the ZFS pool it lives on. Sending a recursive ZFS Snapshot will be perfectly fine, even across tape changes.

I don't know what TAR will want at the end, but if it supports multiple tapes directly, then good.

As for the timeout, I don't think that will be an issue. BUT, you probably should run the command in a TMUX or Screen session.
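
For example, with tmux, (the session name is just an example);
tmux new -s zfs_backup
# run the send pipeline inside the session, detach with Ctrl-b d, re-attach later with:
tmux attach -t zfs_backup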

...
I do know that Z1 isn't recommended, but I think for my application it is OK.
Okay.

Jannisberry

Cadet
Joined
Nov 8, 2023
Messages
4
I don't know what TAR will want at the end, but if it supports multiple tapes directly, then good.

As for the timeout, I don't think that will be an issue. BUT, you probably should run the command in a TMUX or Screen session.
A tar command normally looks like this:
tar -cvf /dev/*tapedrive* /*file/directory*
so it wants some kind of file or directory to know where a file in the archive should start and end, which the zfs send command doesn't really output. I looked through some documentation and found that the send output is more of a data stream, which tar cannot handle at all. Since this is looking to be a dead end, I had another idea.

What I could also do is copy everything that is too big onto the tapes and delete it from the pool, then take the much smaller snapshot and send that to another computer. That is way easier and a lot more efficient, because I could use both of my tape drives simultaneously.

I would have loved to have a complete ZFS backup of the pool on tape, though, so that if it were ever needed I could just revive it and have a head start if something did happen.

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I am not sure what the syntax would be. Looking at the Linux TAR manual page, and some googling, I am not sure TAR will do what you want. Yes, it can write 780GByte volumes and handle tape changes, (from what I saw). But, I don't see a way for it to read the archive contents from standard input, i.e. from the ZFS Send process.

Looking at a similar program, DAR, (Disk Archiver), it seems to have the same limitation. It is even slightly worse, because it wants to name the destination files differently, which kinda prevents using a tape device as a target. But, there may be options buried in the extensive manual page that I did not see.

Perhaps using named pipes;
mknod zfs_send p
nohup zfs send -Rp POOL@SNAPSHOT >>zfs_send &
tar -cvf /dev/*tapedrive* --multi-volume --tape-length=780G zfs_send

Though be warned, GNU tar normally records a FIFO as a special file entry rather than reading data out of it, so test this on something small before trusting it.

I've now exhausted my knowledge on the subject.
Joined
Oct 22, 2019
Messages
3,641
RAIDZ expansion can't come soon enough...

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Jannisberry What was the issue with the original "dd" method from post #2?
Tape changes. My assumption is that the pool is larger than the 780GByte tape size. With TAR's ability to pause for tape changes, that allows backing up a pool larger than a single tape cartridge. This should not hurt the ZFS Send, because it is reading from a read-only ZFS Snapshot and will simply block when it can't write more data.
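
For what it is worth, mbuffer is the tool most often mentioned for exactly this job: it buffers the stream onto tape and can pause for a cartridge change when the medium fills. A sketch only, the device name, buffer sizes and prompt are examples, and check the mbuffer manual page for the exact -A semantics;
zfs send -Rpv POOL@SNAPSHOT | mbuffer -m 1G -s 256k -o /dev/nst0 -A "echo 'Insert next tape and press enter' >&2; read x </dev/tty"
mbuffer -i /dev/nst0 -m 1G -s 256k -A "echo 'Insert next tape and press enter' >&2; read x </dev/tty" | zfs receive -dFuv POOL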

Jannisberry

Cadet
Joined
Nov 8, 2023
Messages
4
So I've tried to look into this a little deeper on my own, and I can't seem to find a way to write to multiple tape drives from "one" file through stdin. If I could have just the snapshot as a file, I could probably write it in full to multiple drives without the send command; I just don't have the storage for a second copy of my NAS. I will just expand the way I had planned beforehand. Thanks @Arwen for your help, I didn't think it would be this hard.