ghostwolf59
Contributor · Joined Mar 2, 2013 · Messages 165
Hi guys,
Sorry for somewhat repeating this question, but I'm narrowing it down. I'm moving from a UFS storage setup, where I traditionally took manual backups onto secondary media using rsync, to a similar setup using ZFS volumes and datasets. I've now (I think) managed to put together a bash script that does this, and I'd like to hear from you guys what you think is right/wrong with it...
A couple of notes on this...
1. Generally I would have liked to simply call a function as part of an if statement, but when I do, it doesn't seem to work (the returned 1 or 0 doesn't seem to be picked up/recognized by the if statement, i.e. if func_foo <arg> then ...),
so I had to fall back on the somewhat messy approach of trapping the return code of the last command in my script, i.e.
func_foo <arg>
status=$? #status from last function call
if [ $status == 0 ] ; then ....
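For what it's worth, bash's `if` does act directly on a function's exit status, but only under the shell's own convention, which is the reverse of most languages: return 0 means success/true, any non-zero value means failure/false. A minimal self-contained sketch:

```shell
#!/bin/bash
# A function used directly in an if statement must return 0 on
# success and non-zero on failure (the shell convention, which is
# the reverse of "1 = true" in most other languages).
func_foo() {
    [ -n "$1" ]    # exit status of the last command becomes the return value
}

if func_foo "somevolume"; then
    echo "func_foo succeeded"
else
    echo "func_foo failed"
fi
```

With functions that `return 1` to mean success, as the script below does, `if func_foo <arg>; then` takes the *else* branch, which is why the direct form appeared not to work.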
2. When trying this out on a volume that holds my jails, I noticed that the cleanup of the old snapshots failed due to a nested snapshot created by the jail (or something else), i.e. <volume>/jails/.warden-template-pluginjail@clean.
If that "clean" snapshot exists, the top-down destroy of snapshots fails (even though the return code indicates success) - so what I am left with is a snapshot <volume>/jails/.warden-template-pluginjail@clean that other plugins I have installed depend on, such as
<backupVolume>/<sourceVolume>-ccyymmdd/jails/sickbeard_2
<backupVolume>/<sourceVolume>-ccyymmdd/jails/plexmediaserver_1
<backupVolume>/<sourceVolume>-ccyymmdd/jails/firefly_1
Once these dependencies have been removed, I can successfully destroy the parent
<backupVolume>/<sourceVolume>-ccyymmdd/jails/.warden-template-pluginjail@clean
So I tried to cater for this scenario, which only occurs IF the jail backup volume is mounted when I attempt to back up the source volume and its child datasets containing the jails - a massive fix that I found a bit weird.
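The ordering constraint above can be sketched on its own: the jail datasets cloned from the template snapshot have to be destroyed before the template snapshot itself. In the sketch below, `run()` only echoes the commands so the ordering is visible (drop the echo to execute for real), and the backup dataset name is a hypothetical example of the <backupVolume>/<sourceVolume>-ccyymmdd convention:

```shell
#!/bin/bash
# Sketch of the cleanup ordering: dependent clones first, then the
# template snapshot they were cloned from. run() echoes the commands
# instead of executing them; dataset names are illustrative only.
run() { echo "would run: $*"; }

backup="BACKUP/tank-20140101"    # hypothetical <backupVolume>/<sourceVolume>-ccyymmdd
for jail in sickbeard_2 plexmediaserver_1 firefly_1; do
    run zfs destroy -r "${backup}/jails/${jail}"
done
# only once the dependent clones are gone can the template snapshot go
run zfs destroy "${backup}/jails/.warden-template-pluginjail@clean"
```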
The aim is to transfer the source volume to a secondary device that I can mount at will with no intention to keep snapshots created on the source volume once the backup is complete.
So this is the bash script I have so far... The backup seems to work fine, but I've never (successfully) tried the restore.
The command line syntax is...
backup: <shellscript> backup <sourceVolumeName>
restore: <shellscript> restore <sourceVolumeName> <date (ccyymmdd)>
The backup creates a full backup onto my secondary media (named BACKUP)
It holds all data and snapshots of the volume (naming convention: BACKUP/<SourceVolumeName>-<ccyymmdd>).
It also has the snapshots BACKUP@<SourceVolumeName>-<ccyymmdd>.
So here's the complete backup/restore bash script... anything I ought to change?
btw. I am fully aware of the replication functions that come with FreeNAS, but from what I understand there are a couple of prerequisites:
1. A scheduled snapshot task needs to be defined
2. Once the scheduled task is in place you can set up replication, but that seems to assume your secondary media is always mounted, which in my case isn't possible: 1. I can't permanently mount a secondary replication target for each of my source drives, and 2. the total space across all my source media exceeds the space on my backup (replication) media.
So how wrong is this...? Like I said, the backup seems to work just fine; I'm just not sure about the restore, where I would like to be able to recover all data and the originally defined datasets in case of a complete hardware failure.
I've taken a big chance by pushing ahead before being 100% convinced it would work: my old UFS drives have now been converted to ZFS, and I now need to confirm that I can also recover from a failure or data loss.
Worst case scenario, I can at least use my new ZFS backups to manually recover each dataset and its data, but I would like this to be somewhat automated.
I'm also not sure how I can use this script to recover by overwriting the destination volume - I think the restore could work on a new, clean disk, but if the destination volume already has datasets and data stored, I don't think it would work.
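One hedged sketch of a "restore over existing data" path: `zfs receive -F` rolls back conflicting snapshots on the target, but pre-existing datasets that aren't in the stream are left in place, so the existing children need an explicit destroy first (a pool's root dataset itself cannot be destroyed). `run()` echoes rather than executes, and every name below is a hypothetical example:

```shell
#!/bin/bash
# Sketch: restore onto a target that already holds datasets.
# run() echoes the commands instead of executing them.
run() { echo "would run: $*"; }

src="tank"; dt="20140101"; back="BACKUP"   # hypothetical names

# remove each existing first-level child individually (example
# children shown) so nothing pre-existing shadows the restore
for child in "${src}/jails" "${src}/media"; do
    run zfs destroy -r "$child"
done

# then receive the full recursive stream; -F forces a rollback of
# anything on the target that still conflicts
run "zfs send -R ${back}/${src}-${dt}@${src}-${dt} | zfs receive -vF ${src}"
```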
Using rsync has proven convenient, since I could simply run it repeatedly and be confident that any changes would be replicated from source to destination.
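For what it's worth, the ZFS analogue of that repeatable rsync run is an incremental send: keep the previous snapshot on both sides and send only the delta. A hedged sketch (it does assume keeping one snapshot around, and all pool/snapshot names are hypothetical; `run()` echoes rather than executes):

```shell
#!/bin/bash
# Sketch of an incremental zfs send, the rough equivalent of
# re-running rsync: only blocks changed since the last common
# snapshot cross to the backup. Names are illustrative only.
run() { echo "would run: $*"; }

src="tank"; back="BACKUP"
prev="tank@backup-20140101"   # snapshot already present on both sides
next="tank@backup-20140201"   # new snapshot of the current state

run zfs snapshot -r "$next"
# -i sends only the difference between $prev and $next
run "zfs send -R -i $prev $next | zfs receive -vF ${back}/${src}"
```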
Code:
#!/bin/bash
#
# backup/restore synchronization
#
# https://forums.freenas.org/index.php?threads/zfs-send-to-external-backup-drive.17850/
#echo "Starting"
req="${1,,}" #convert to lower case request (backup or restore)
drive="${2^^}" #convert to upper case target/destination drive
dt="${3^^}" #convert to upper case date of a backup point to restore from
force="${3,,}" #convert flag setting to lower case ($3 doubles as the restore date and the sync -f flag)
u="" #default force setting flag (rsync)
back="BACKUP" #default name of backup volume
status=0
# run if user hits control-c
function control_c() {
echo -en "\n*** Ouch! Exiting ***\n"
exit 130 #conventional exit status for SIGINT
}
# trap keyboard interrupt (control-c)
trap control_c SIGINT
#pause prompt
function pause(){
read -p "$*"
}
#command line syntax error message
function error() {
echo "command line error ${@}"
echo " to backup zfs with snapshots:"
echo " /mnt/<drive>/backrest.sh backup <drive>"
echo " or to restore zfs with snapshots:"
echo " /mnt/<drive>/backrest.sh restore <drive> <ccyymmdd> or sync <drive>"
echo " or to do rsync do ..."
echo " backup drive to /mnt/${back}:"
echo " /mnt/<drive>/backrest.sh sync <drive>"
echo " restore from /mnt/${back}:"
echo " /mnt/<drive>/backrest.sh syncrestore <drive>"
return
}
#check if the drive is mounted or even exists on the system
function is_mounted() {
local FS=$(zfs get -H mounted "${@}")
FS_REGEX="^${@}[[:space:]]+mounted[[:space:]]+yes[[:space:]]+-" #[[:space:]] rather than \s, which POSIX ERE does not guarantee
#check for errors
if [ "${FS}" == "" ]; then
echo "${@} not found/mounted"
return 0 #evaluates to false - error condition
fi
if [[ ! $FS =~ $FS_REGEX ]]; then #note the negation: the regex matches when the drive IS mounted
echo "${@} not mounted"
return 0 #evaluates to false - error condition
fi
#drive checked out ok
echo "${@} mounted"
return 1 #evaluates to true - success
}
#wrapper to ensure BACKUP drive and nominated target/destination drive is mounted/exists
function checkMountPoints() {
#checks if the backup/restore drive exists
#check source drive
is_mounted ${@}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "${@} checked out ok"
else
return 0 #evaluates to false - error condition
fi
#check backup drive
is_mounted ${back}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "${back} checked out ok"
return 1
else
return 0 #evaluates to false - error condition
fi
}
#rsync across the whole source volume to backup destination volume
function RyncFromSourceToDetachedMediaBackup() {
#checkMountPoints ${drive}
checkMountPoints ${1}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "rsync -pogavt${2} /mnt/${1} /mnt/${back}"
rsync -pogavt${2} /mnt/${1} /mnt/${back}
status=$? #status from last function call
if [ $status == 0 ];
then
echo "rsync from /mnt/${1} to /mnt/${back}/${1} successful"
else
echo "rsync from /mnt/${1} to /mnt/${back}/${1} unsuccessful"
fi
else
status=1
fi
return $status
}
#rsync across the whole backup volume to destination source volume
function RsyncFromBackupToSourceRestore() {
#checkMountPoints ${drive}
checkMountPoints ${1}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "rsync -pogavt${2} /mnt/${back}/${1}/ /mnt/${1}"
rsync -pogavt${2} /mnt/${back}/${1}/ /mnt/${1}
status=$? #status from last function call
if [ $status == 0 ];
then
echo "rsync from /mnt/${back}/${1}/ to /mnt/${1} successful"
else
echo "rsync from /mnt/${back}/${1}/ to /mnt/${1} unsuccessful"
fi
else
status=1
fi
return $status
}
function takesnapshot() {
#echo "start takesnapshot() of ${@} as ${@}-$(date +%Y%m%d)"
zfs snapshot -r ${@}@${@}-$(date +%Y%m%d)
status=$? #status from last function call
if [ $status == 0 ];
then
status=1
else
status=0
fi
#echo "chk outcome takesnapshot() $status"
return $status
}
function renamesnapshot() {
#echo "start renamesnapshot() ${@}@${@}-${dt} to ${@}@${@}-$(date +%Y%m%d)-restored_$(date +%Y%m%d)"
zfs rename -r ${@}@${@}-${dt} ${@}@${@}-$(date +%Y%m%d)-restored_$(date +%Y%m%d) #${dt}, not the undefined ${date}
status=$? #status from last function call
if [ $status == 0 ];
then
status=1
else
status=0
fi
#echo "chk outcome renamesnapshot() $status"
return $status
}
function deletesnapshot() {
#echo "start deletesnapshot() ${@}@${@}-$(date +%Y%m%d)"
zfs destroy -r ${@}@${@}-$(date +%Y%m%d)
status=$? #status from last function call
if [ $status == 0 ];
then
status=1
else
status=0
fi
#echo "chk outcome deletesnapshot() $status"
return $status
}
function sendsnapshot () {
#echo "start sendsnapshot() ${@}@${@}-$(date +%Y%m%d) to ${back}/${@}-$(date +%Y%m%d)"
zfs send -R ${@}@${@}-$(date +%Y%m%d) | zfs receive -vF ${back}/${@}-$(date +%Y%m%d)
status=$? #status from last function call
if [ $status == 0 ];
then
status=1
else
status=0
fi
#echo "chk outcome sendsnapshot() $status"
return $status
}
#takes the backup of nominated drive, sending it to the BACKUP drive
function doBackup() {
checkMountPoints ${drive}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "about to take snapshot of $drive"
takesnapshot $drive
status=$? #status from last function call
if [ $status == 1 ];
then
echo "about to send snapshot of $drive to $back"
sendsnapshot ${drive}
status=$? #status from last function call
if [ $status == 1 ];
then
echo "about to take snapshot of ${back}"
takesnapshot ${back};
status=$? #status from last function call
if [ $status == 1 ];
then
echo "about to delete snapshot of ${drive}"
deletesnapshot ${drive}
status=$? #status from last function call
if [ $status == 1 ];
then
#echo "about to delete snapshot of ${back}"
#deletesnapshot ${back}
#status=$? #status from last function call
#if [ $status == 1 ];
#then
echo "Successfully backed up ${drive} onto ${back}"
return
#fi
fi
fi
fi
fi
fi
error
echo "Unsuccessful backup of ${@}"
return
}
function validateDate() {
# Script expecting a Date parameter in YYYYMMDD format as input
echo ${dt} | grep '^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]$'
status=$? #status from last function call
if [ $status -eq 0 ]; #valid date
then
status=0 #do nothing statement
else
#since previous test failed I will try to reverse the argument assuming a reverse input
echo ${drive} | grep '^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]$'
status=$? #status from last function call
if [ $status -eq 0 ]; #valid date
then
# parameters in reversed argument order - switching order
tmpdrive="${dt}"
dt="${drive}"
drive="${tmpdrive}"
fi
fi
return $status #0=success, otherwise error
}
#some checks post restore a zfs volume and all datasets
function doRestore() {
echo "Start Restore..."
validateDate
status=$? #status from last function call
#validated date params
if [ $status == 0 ];
then
checkMountPoints ${drive} #check if nominated target/destination drive/volume is mounted/available
status=$? #status from last function call
if [ $status == 1 ]; #target/destination drives(volumes) accessible
then
echo "Initiate restore of $drive from ${back}@${drive}-${dt}"
execRestore ${drive} ${dt} #do the restore of drive for date (assuming it exists for set date)
status=$? #status from last function call
if [ $status == 1 ]; #success (execRestore returns 1 on success, 0 on error)
then
echo "Successfully restored $drive from ${back}@${drive}-${dt}"
else
echo "Unsuccessful restore of $drive from ${back}"
error $status
fi
else
echo "${drive} or ${back} not mounted"
error $status
fi
else
echo "${dt} is not a valid date"
error $status
fi
return
}
function destroyDataset () {
echo "about to delete dataset ${@}"
if [ "${@}" == "" ];
then
echo "empty dataset passed"
status=1
else
#zfs destroy -R $back/$drive-$dt/${@}
zfs destroy -R ${@}
status=$? #status from last function call
if [ $status == 0 ]; #successful destroy
then
#echo "Successfully deleted dataset $back/$drive-$dt/${@}"
echo "Successfully deleted dataset ${@}"
else
#echo "Faailed to delete dataset $back/$drive-$dt/${@}"
echo "Failed to delete dataset ${@}"
fi
fi
return $status
}
function jailCleanExists () {
echo "check if ${@} exists"
zfs list ${@}
status=$? #status from last function call
if [ $status == 0 ]; #dataset exists
then
status=1
else
status=0
fi
echo "status from call $status"
return $status
}
function removeJailDependencies () {
exception=0
jailCleanExists ${@}/jails/sickbeard_2
status=$? #status from last function call
if [ $status == 1 ];
then
destroyDataset ${@}/jails/sickbeard_2
status=$? #status from last function call
if [ $status == 0 ];
then
echo "successfully removed ${@}/jails/sickbeard_2"
else
echo "Warning: failed to remove ${@}/jails/sickbeard_2"
exception=1
fi
fi
jailCleanExists ${@}/jails/plexmediaserver_1
status=$? #status from last function call
if [ $status == 1 ];
then
destroyDataset ${@}/jails/plexmediaserver_1
status=$? #status from last function call
if [ $status == 0 ];
then
echo "successfully removed ${@}/jails/plexmediaserver_1"
else
echo "Warning: failed to remove ${@}/jails/plexmediaserver_1"
exception=1
fi
fi
jailCleanExists ${@}/jails/firefly_1
status=$? #status from last function call
if [ $status == 1 ];
then
destroyDataset ${@}/jails/firefly_1
status=$? #status from last function call
if [ $status == 0 ];
then
echo "successfully removed ${@}/jails/firefly_1"
else
echo "Warning: failed to remove ${@}/jails/firefly_1"
exception=1
fi
fi
jailCleanExists ${@}/jails/.warden-template-pluginjail@clean
status=$? #status from last function call
if [ $status == 1 ];
then
destroyDataset ${@}/jails/.warden-template-pluginjail@clean
status=$? #status from last function call
if [ $status == 0 ];
then
echo "successfully removed ${@}/jails/.warden-template-pluginjail@clean"
else
echo "Warning: failed to remove ${@}/jails/.warden-template-pluginjail@clean"
exception=1
fi
fi
status=$exception
return $status
}
#restore a zfs volume and all datasets
function execRestore(){
#echo "Restoring ${drive} for ${dt} from ${back}/${drive}-${dt}..."
#test
echo "Restoring ${back}@${drive}-${dt} to ${drive}..."
#assume the backup for set date exists - the backup name is based on previous backup with inherited source volume name
zfs send -R ${back}@${drive}-${dt} | zfs receive -vF ${drive} #no trailing slash - zfs receive takes a dataset name, not a path
status=$? #status from last function call
if [ $status == 0 ]; #worked
then
#take a new snapshot from restored data
takesnapshot ${drive}
status=$? #status from last function call
if [ $status == 1 ]; #snapshot successful (takesnapshot returns 1 on success)
then
renamesnapshot ${drive} #rename previous snaphot on the destination volume to prevent accidental restore since the backup release superseeds this snapshot
status=$? #status from last function call
if [ $status == 1 ]; #successful rename (renamesnapshot returns 1 on success)
then
echo "about to delete snapshot of ${back}"
deletesnapshot ${back} #remove the snapshot from the backup drive since it's now been restored (not sure this is the correct approach)
status=$? #status from last function call
if [ $status == 1 ]; #successful delete (deletesnapshot returns 1 on success)
then
echo "about to delete dataset ${back}@${drive}-${dt}"
destroyDataset ${back}@${drive}-${dt}
status=$? #status from last function call
if [ $status == 0 ]; #successful destroy (destroyDataset returns 0 on success)
then
echo "check if ${drive}/jails/.warden-template-pluginjail@clean exists"
jailCleanExists ${drive}/jails/.warden-template-pluginjail@clean
status=$? #status from last function call
if [ $status == 1 ]; #@clean exists
then
echo "remove dependencies for ${back}/${drive}-${dt}/jails/.warden-template-pluginjail@clean"
removeJailDependencies ${back}/${drive}-${dt}
status=$? #status from last function call
if [ $status == 0 ]; #@cleaned
then
echo "Dependencies successfully removed for ${back}/${drive}-${dt}/jails/.warden-template-pluginjail@clean"
status=1
else
echo "Exception thrown while cleaning up dependent datasets for ${back}/${drive}-${dt}/jails/.warden-template-pluginjail@clean"
status=0
fi
else
status=1
fi
else
status=0
fi
fi
fi
fi
else
status=0 #normalize a zfs send/receive failure to this function's 0=error convention
fi
return $status #status from last call returned (0=error, 1=success)
}
#main logic (backup or restore)
if [ "${drive}" == "" ];
then
error
else
if [ "${req}" == "backup" ];
then
doBackup $drive
else
if [ "${req}" == "restore" ];
then
doRestore $drive $dt
else
if [ "${req}" == "sync" ];
then
if [ "${force}" == "-f" ];
then
u="u"
fi
echo "WARNING: You are about to rsync /mnt/${drive} onto /mnt/${back}"
pause "Press [Enter]key to continue or CTRL+C to cancel"
RyncFromSourceToDetachedMediaBackup ${drive} ${u}
status=$? #status from last function call
if [ $status == 0 ]; #rsync successful
then
echo "ok"
else
error
fi
else
if [ "${req}" == "syncrestore" ];
then
if [ "${force}" == "-f" ];
then
u="u"
fi
echo "WARNING: You are about to rsync /mnt/${back}/${drive}/ onto /mnt/${drive}"
pause "Press [Enter]key to continue or CTRL+C to cancel"
RsyncFromBackupToSourceRestore ${drive} ${u}
status=$? #status from last function call
if [ $status == 0 ]; #rsync successful
then
echo "ok"
else
error
fi
else
error
fi
fi
fi
fi
fi
exit 0
#