Best way to backup a small pool?

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
I have a TrueNAS 12 pool running on an SSD, called services and used to store the jails:



What would be the best way to back up everything, in case the SSD breaks and I need to replace it with a new one? I currently run a few official plugins, each with its own dedicated jail. Thank you.
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Can I re-invigorate your original question please, because I need the answer too. The suggestion to go for replication on another system just to back up an SSD seems a little... extreme. I too plan to put jails and 3 or 4 plugins onto a 128 GB SSD (and make it the system dataset too), where the main HDDs are 2 x 8 TB data stores: more than sufficient room to keep a backup of a small SSD like that. But how? Can I back up or clone the SSD to the HDD pool in a way that it can easily be reinstated if/when the SSD dies? Or is a snapshot of the SSD sufficient, and all I need to restore its system & jails & plugins when it dies?

Thanks for your thoughts.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Can I re-invigorate your original question please, because I need the answer too. The suggestion to go for replication on another system just to back up an SSD seems a little... extreme. I too plan to put jails and 3 or 4 plugins onto a 128 GB SSD (and make it the system dataset too), where the main HDDs are 2 x 8 TB data stores: more than sufficient room to keep a backup of a small SSD like that. But how? Can I back up or clone the SSD to the HDD pool in a way that it can easily be reinstated if/when the SSD dies? Or is a snapshot of the SSD sufficient, and all I need to restore its system & jails & plugins when it dies?

Thanks for your thoughts.
Follow the link that Kris Moore posted above and look at the 'local' sub-directory on the left-hand pane of the manual. It will tell you how to set up a replication task to replicate a snapshot to a second pool on the same system (e.g. ssd pool to hdd pool).

Edit: Additionally, you will need to set up a snapshot task for the dataset or pool you want to replicate, which is also in the manual.
 
Last edited:
Joined
Oct 22, 2019
Messages
3,579
I prefer the terminal, thank you, much appreciated.

If you don't want to set up a Periodic Snapshot Task and Replication Task for iocage, and you only want to use the terminal:

Is iocage the only thing that exists under the services pool? (If your hidden .system dataset lives there, move it to another pool temporarily, like your boot pool.)

In theory, you could create your new, larger pool and name it services_new. Stop all your jails/plugins, then rename the current services pool to services_old and rename services_new to services. Then send-recv services_old/iocage to services/iocage. You may or may not have to run an additional command to make sure the jails continue to operate like normal (I think they should, since you're keeping the same pool name of services; I forget the command, I'd have to look it up).


The full send-recv looks something like this:
Code:
zfs snap -r services_old/iocage@migrate
zfs send -v -w -R services_old/iocage@migrate | zfs recv -v -d -F services
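
The rename step itself would go through zpool export/import. A sketch only, under the assumptions above (pool names are from my example, jails stopped first; I believe the command I couldn't recall is iocage activate, which re-points iocage at a pool, but verify that before relying on it):

```shell
# Sketch: renaming pools by export and re-import under a new name.
zpool export services                  # take the old SSD pool offline
zpool import services services_old    # re-import it as services_old
zpool export services_new
zpool import services_new services    # new pool now answers to "services"

# Possibly needed so iocage uses the renamed pool:
iocage activate services
```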



...or something like that. You can move the .system dataset back to the services pool now, if that's how you originally had it.

Someone who has migrated jails like this before might have better input.

EDIT: I just noticed the time-stamps on these posts. :tongue:
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Follow the link that Kris Moore posted above and look at the 'local' sub-directory on the left-hand pane of the manual. It will tell you how to set up a replication task to replicate a snapshot to a second pool on the same system (e.g. ssd pool to hdd pool).

Edit: Additionally, you will need to set up a snapshot task for the dataset or pool you want to replicate, which is also in the manual.
Thank you for that exact & specific clarification! So just to clarify my own understanding: 'replication' isn't saving any sort of data other than a snapshot (or series of regular snapshots), then? So it seems no different from going to Tasks / Periodic Snapshot Tasks and setting it up there? I must be missing something...

Also, when an SSD dies and a new one is inserted, a) will the system come up, with no system dataset and no jails specified, and b) if it does, how do I 'apply' or 'load' or 'restore' (whatever the terminology is) the snapshot in order to 'clone' the lost SSD's setup onto the new SSD?

Thank you once again.
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Anyone?
 
Joined
Oct 22, 2019
Messages
3,579
So just to clarify my own understanding: 'replication' isn't saving any sort of data other than a snapshot (or series of regular snapshots), then? So it seems no different from going to Tasks / Periodic Snapshot Tasks and setting it up there? I must be missing something...
I'm not sure I understand the question as you intended?

Creating snapshots (manual or "periodic") is different from creating a Replication Task.

When you create a snapshot, whether manually or automatically via Periodic Snapshots, it makes a reference to the dataset(s) at that moment in time: the files that exist, the files that don't, down to the very records of file modifications and metadata (file name, file location, directory name, etc.).

A replication, whether done manually or automatically via a Replication Task, is when one or more snapshots are sent to another destination, whether local or remote.
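
In terminal terms, the distinction might look like this (pool and snapshot names are made up for illustration):

```shell
# A snapshot by itself lives inside the source pool -- it is not a backup:
zfs snapshot services/iocage@manual-2021-04-29

# Replication is the separate step that copies that snapshot to another
# pool (local here; it could also be piped over ssh to a remote box):
zfs send services/iocage@manual-2021-04-29 | zfs recv -F NAS/backup-iocage
```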
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
I'm revisiting this, since I need to replace the SSD where services/iocage dataset resides. Can anyone post a quick list of shell commands allowing me to create a snapshot, destroy the dataset, replace the disk with a bigger one and restore the snapshot? I prefer the terminal, thank you, much appreciated.
The snapshot(s) live in the same place as the dataset. They are not some kind of magical backup that is stored in an extra location. So if you create a snapshot, then destroy the dataset, the dataset and all snapshots are gone.
You need to create a snapshot, replicate that snapshot by means of zfs send ... | zfs receive ... to a different location, then replace your SSD (and, as I read it, create a completely new pool), and then restore the snapshot with the same command, just the other way round.

Actually the zfs receive ... is optional. You can store a snapshot (the whole dataset at that point in time, actually) in a regular file:
Code:
zfs snapshot <pool>/<dataset>@now
zfs send <pool>/<dataset>@now > /some/path/with/space/mysnapshot


Then to restore:
Code:
zfs receive <pool>/<dataset> < /some/path/with/space/mysnapshot


You need to do this for every dataset and sub-dataset of your jails individually. There are "recursive" flags to the snapshot as well as to the send/receive commands, though. I'll refer you to the documentation for now.
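
A sketch of those recursive flags, with hypothetical dataset names (check the zfs-send/zfs-receive man pages before relying on this):

```shell
# -r snapshots the dataset and every child dataset in one atomic step:
zfs snapshot -r services/iocage@migrate

# -R sends the dataset, its children, and their snapshots;
# -d -F on the receive side recreates the tree under hdd/backup:
zfs send -R services/iocage@migrate | zfs recv -d -F hdd/backup
```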

Most important takeaway for @TECK and @NumberSix: the snapshots are stored in the pool/dataset. If you destroy the pool by exchanging your SSD you won't have any snapshots. They are not magically saved some place else.
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
I'm not sure I understand the question as you intended?

Creating snapshots (manual or "periodic") is different from creating a Replication Task...
Thank you Winnie! You're separating the woods from the trees for me here. Much appreciated!
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Patrick. You are a star.
I'm getting there with my understanding, one shaky step at a time - thanks to help from the very generous people on this forum, like you for one excellent example. Yes, your detailed explanation is extremely appreciated. I might need to copy/paste such nuggets and make myself a physical notebook / uber crib sheet to help me through the tougher slopes of this learning curve. Again, thank you so much for rendering the opaque so crystal clear. Bravo sir!
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Actually the zfs receive ... is optional. You can store a snapshot (the whole dataset at that point in time, actually) in a regular file:
Code:
zfs snapshot <pool>/<dataset>@now
zfs send <pool>/<dataset>@now > /some/path/with/space/mysnapshot


Then to restore:
Code:
zfs receive <pool>/<dataset> < /some/path/with/space/mysnapshot

Hi Patrick
Excuse me for revisiting this, but when it came to actually trying to implement these commands I got into a mess. Can I ask for clarification and perhaps a literal example?
So, clarification. When you say 'you can store a snapshot in a regular file' - what is a snapshot, if not a file, somewhere? Relatedly, where are they stored and what are their names? For example, the snapshots I've taken so far are of the (I don't have all the correct terminology yet) 'root' entity to which all the datasets attach, marking it 'recursive'. It seems to be the best way of proceeding to make a snapshot that is all embracing.

Now, I have an SSD which is named ada0, and a pair of mirrored hard drives which are ada1 and ada2. In logical terms I name them 'System' and 'NAS', respectively.
So, if I type
Code:
zfs snapshot <pool>/<dataset>@now

then (not knowing where that file gets saved or what its name is, but let's say for example /dev/mnt/SSD/snapshots/snapshot001.snap), would my implementation of your example commands become:

Code:
zfs send  /dev/mnt/SSD/snapshots/snapshot001.snap /some/path/with/space/mysnapshot


or would it be

Code:
zfs send  NAS/(some way of saying 'all child datasets'/snapshot001.snap) /some/path/with/space/mysnapshot


Lastly, the restore command you mention - does that merely move the snapshot file(s) between drives, or does it move and load the snapshot?

As you see, I am very much floundering in getting your template example translated into a concrete example here. Any help would be very much appreciated! Thank you in anticipation.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
@NumberSix Have you read the ZFS primer? If not, you should.

ZFS is a copy-on-write filesystem. So it never overwrites data in place. It writes new data, then re-arranges internal data structures to point to the new data (the re-written data structures are also written to new locations ...) and finally releases the old blocks that are not referenced anymore as "free".

A snapshot does not reside in any particular file. It is part of the ZFS dataset in question. A snapshot means that the old data is not released for future overwriting but kept - with appropriate management information pointing to it.

So I have this dataset:
Code:
root@freenas[~]# zfs list -r -t all hdd/scripts
NAME          USED  AVAIL     REFER  MOUNTPOINT
hdd/scripts  1.81M  4.44T     1.81M  /mnt/hdd/scripts


Now I take a snapshot:
Code:
root@freenas[~]# zfs snapshot hdd/scripts@2359
root@freenas[~]# zfs list -r -t all hdd/scripts
NAME               USED  AVAIL     REFER  MOUNTPOINT
hdd/scripts       1.81M  4.44T     1.81M  /mnt/hdd/scripts
hdd/scripts@2359     0B      -     1.81M  -


This snapshot is just a reference to the current state of the dataset. I can - if I like - rollback to it and undo any changes:
Code:
root@freenas[~]# touch /mnt/hdd/scripts/foobarbaz
root@freenas[~]# ll /mnt/hdd/scripts/foobarbaz
-rw-r--r--  1 root  wheel  uarch 0 Apr 29 00:01 /mnt/hdd/scripts/foobarbaz
root@freenas[~]# zfs rollback hdd/scripts@2359
root@freenas[~]# ll /mnt/hdd/scripts/foobarbaz
ls: /mnt/hdd/scripts/foobarbaz: No such file or directory


So the rollback returned the dataset to the state it had before I created that empty file.

Now, to save a snapshot in a real file that you can copy off your NAS and store somewhere else, do something like this:
Code:
root@freenas[~]# zfs send hdd/scripts@2359 >/mnt/hdd/share/archiv/hdd-scripts@2359
root@freenas[~]# ll /mnt/hdd/share/archiv/hdd-scripts@2359
-rw-r--r--  1 root  nogroup  uarch 3617384 Apr 29 00:04 /mnt/hdd/share/archiv/hdd-scripts@2359


So now there is a file in my "archiv" dataset that contains all the data of the "scripts" dataset at the time I took the snapshot.

Now imagine your entire NAS goes *poof* but you have this file on some medium. You set up a new ZFS capable system and copy the file over somehow. To restore:
Code:
root@freenas[~]# zfs receive hdd/scripts-restored < /mnt/hdd/share/archiv/hdd-scripts@2359
root@freenas[~]# ll /mnt/hdd/scripts-restored
total 110
drwxr-xr-x  3 root  wheel  uarch     9 Mar 19 23:39 ./
drwxr-xr-x  8 root  wheel  uarch     8 Apr 29 00:07 ../
-rwxr-xr-x  1 root  wheel  uarch   227 Jan 15  2020 backup-config.sh*
drwxr-xr-x  2 root  wheel  uarch    32 Apr 29 00:00 config/
-rwxr-xr-x  1 root  wheel  uarch 56599 Jan 11  2020 disklist.pl*
-rwxr-xr-x  1 root  wheel  uarch   117 May 29  2020 nvme-power.sh*
-rwxr-xr-x  1 root  wheel  uarch   795 Apr 14 16:50 nvme-wear.sh*
-rwxr-xr-x  1 root  wheel  uarch   276 Jan 11  2020 shutdown-bhyve.sh*
-rwxr-xr-x  1 root  wheel  uarch   709 Jan 31 23:12 zpool-metrics.sh*


Now /mnt/hdd/scripts-restored contains exactly the data of /mnt/hdd/scripts at the time I took that snapshot.


BTW: note how I use ZFS dataset names ("pool/path/path") for all the ZFS operations, but real file paths ("/mnt/path/path/file") when referencing a file.


HTH,
Patrick
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,455
I'm not sure I can answer all your questions, but maybe I can help a bit. A snapshot is stored in the metadata of a dataset, and in its simplest form the data stored consists only of the name of the snapshot and the "transaction group" (effectively, a timestamp) when it was created; snapshots are therefore nearly instantaneous and take almost no space. The name consists of the ZFS path of the dataset, the symbol @, and an arbitrary name. So a snapshot called "now" of the dataset "foo" on the pool "tank" would be tank/foo@now. To create that snapshot, you'd run zfs snapshot tank/foo@now. If you then wanted to save that snapshot into a file (perhaps to back it up onto some other, non-ZFS filesystem), you'd run zfs send tank/foo@now > /mnt/tank/mysnapshot. Now it's an ordinary file and you can do whatever you want with it.
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Hi Patrick
That was immensely helpful. Thank you!

This is an edit to an earlier version of this post, made in the belief/hope you haven't seen the first draft yet.

I've read the Primer since yesterday! Very interesting indeed! I know next to nothing about Linux or command-line instructions in Linux, ZFS or FreeBSD, so this is a near-vertical learning curve for me. That Primer was very useful as a place to start, therefore. Thank you.

One detail you confused me with (or I confused myself). When restoring the snapshot file, in one post you used zfs receive, while in another place you did something more complicated involving zfs rollback. I wonder if you could unpack both of those approaches and elaborate on what's going on in each case. If it helps simplify what I'm asking you, I would only anticipate a situation where I might need to recover a snapshot stored on a separate device back to a possibly damaged origin device. If I got that far, I imagine I'd then use the GUI to restore the retrieved snapshot.
Thank you.

Footnote:
By way of safeguarding snapshots (which is my ultimate goal here), if I made the destination for the zfs send a folder which was also a Samba share (see where I'm going?!), could I save a copy of that folder to Windows 10? Then, if it was ever needed, copy it back to that Samba share for restoring - assuming none of the required permissions got destroyed by Windows, that is? Could that work?
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,455
When one runs something like "zfs snapshot Pool/Dataset@whenever" - where does that file (or "entity / collection of data") get stored (prior to any 'send')?
It gets stored, as I wrote, in the dataset's metadata. If you want to destroy it, zfs destroy pool/dataset@name.
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
It gets stored, as I wrote, in the dataset's metadata. If you want to destroy it, zfs destroy pool/dataset@name.
Thank you for that, danb35. I wonder, is there a sort of 'companion' command that would let me list all the existing snapshots, so I could decide which I want to 'destroy'? If so, would it find both 'native' snapshots and ones that had been turned into files via 'send'?

EDIT:
I just Googled around the topic of "zfs destroy" - according to what I read, it doesn't just remove old snapshots; far from it:
"To destroy a ZFS file system, use the zfs destroy command. By default, all of the snapshots for the dataset will be destroyed. The destroyed file system is automatically unmounted and unshared."
and elsewhere:
"The destroy option of the zfs command unshares, unmounts, and obliterates filesystems".
Spectacularly destructive!!! Hardly what I'm after here: pruning some old snapshots. Or have I misunderstood this?

Thank you.
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,455
is there a sort of 'companion' command that would let me list all the existing snapshots
zfs list -t snapshot
"zfs destroy" - according to what I read it doesn't just remove old snapshots; far from it :
Yes, it can do a number of things. Removing snapshots is one of those things.
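
To reassure you: as long as the name you pass contains an @, zfs destroy removes only that one snapshot, not the filesystem. A sketch using the earlier tank/foo example:

```shell
# Lists snapshots only; this command cannot destroy anything:
zfs list -t snapshot

# Removes just this one snapshot (the @ makes it a snapshot name):
zfs destroy tank/foo@now

# If nervous, do a dry run first: -n shows what would be destroyed
# without actually deleting anything, -v prints the details:
zfs destroy -nv tank/foo@now
```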
 

NumberSix

Contributor
Joined
Apr 9, 2021
Messages
188
Ahh.
I don't think I'll be using an atomic bomb to crack a small hazelnut just yet then. Unless I can find a safer command, I'll stick to the GUI for that operation.
 