Keeping Snapshots for years

Status
Not open for further replies.

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
Just setting up a server to store our critical company data like accounts, financials, etc. I'd ideally like to retain the snapshots we're capturing for several years. Besides the storage implications, are there any other performance or reliability implications of retaining so much snapshotted data?
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
No, the number of snapshots definitely does not affect reliability, and it should not noticeably affect performance (except perhaps when you try to list all of the snapshots).
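To see why listing is the exception: enumerating snapshots has to walk metadata for every one of them. A quick way to gauge what you've accumulated (this assumes a ZFS system; the pool name "tank" is just an example):

```shell
# List every snapshot under the pool; with thousands of snapshots
# this metadata walk can take noticeably long.
zfs list -t snapshot -r tank

# Count them without printing headers, as a quick sanity check.
zfs list -H -t snapshot -r tank | wc -l
```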
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Just keep in mind the performance implications of filling volumes up excessively. Given enough storage, feel free to keep them as long as you want.
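Since long-lived snapshots pin old blocks, it's worth watching how much of the pool they hold. A sketch of the relevant checks (again assuming a ZFS system, with "tank" as a placeholder pool name):

```shell
# Pool-level capacity; ZFS write performance degrades as a pool
# approaches full (commonly cited thresholds are around 80-90% used).
zpool list -o name,size,alloc,free,cap tank

# Per-dataset view showing how much space is held by snapshots alone.
zfs list -o name,used,usedbysnapshots,avail -r tank
```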
 
L

Guest
Personally, I would stage them to another server: a big, fat archive server. That might be what you are doing already. Replication is so easy in FreeNAS.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
Hi

You can always get the data from backups. I suppose you have a tape backup in place.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There have been some who have observed trouble with excessively large numbers of snapshots (four figures or more). I would suggest not making a guinea pig of yourself and your data. Use snapshots on a short cycle for your live server, then replicate to another one on a much longer cycle.
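That split (short retention on the live box, long retention on the replica) can be sketched roughly as follows. All pool, dataset, and snapshot names here are illustrative, and the commands assume a ZFS system with SSH access to the archive host:

```shell
# On the live server: take frequent snapshots (e.g. from cron),
# keeping only a short window of them locally.
zfs snapshot tank/finance@auto-$(date +%Y%m%d.%H%M)

# Periodically replicate to the archive box. -I sends all intermediate
# snapshots between the last common snapshot and the newest one in a
# single stream; the archive keeps them on its own, much longer schedule.
zfs send -I tank/finance@auto-20140801.0000 tank/finance@auto-20140825.0000 \
  | ssh archivehost zfs recv -d archivepool

# Once replicated, old snapshots can be pruned on the live server
# while the archive retains its copies for years.
zfs destroy tank/finance@auto-20140801.0000
```

This keeps the live pool's snapshot count small while the archive absorbs the long tail.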
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I have been doing local replication from one pool to another within my system, and with the many datasets present I had accumulated thousands of snapshots. I was running replication through the FreeNAS replication GUI, but I couldn't understand why it would literally take hours for maybe a few hundred snapshots to replicate, with some (well, actually most of them) reporting no significant change in size. So I tried a manual incremental replication of the same dataset and noticed a huge performance increase: roughly one 312-byte stream per second, versus maybe several minutes each through the GUI.

I believe the replication script is either driven by a timed event that polls the status of the replication task at a fixed interval (maybe once a minute or longer), or simply walking through the entire snapshot list recursively is what's taking the time.

An excerpt from the 'messages' file in ./system/syslog/log:

Code:
Aug 25 22:00:21 freenas autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs list -Hr -o name -t snapshot -d 1 western-backup/Media_library/Photo | tail -n 1 | cut -d@ -f2"
Aug 25 22:00:21 freenas autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "/sbin/zfs inherit freenas:state western-backup/Media_library/Photo@auto-20140823.2142-2w"
Aug 25 22:00:21 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state WD-RAIDZ2/Media_library/Photo@auto-20140823.2112-2w
Aug 25 22:00:22 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2112-2w
Aug 25 22:00:22 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST WD-RAIDZ2/Media_library/Photo@auto-20140823.2142-2w
Aug 25 22:00:22 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2142-2w
Aug 25 22:02:45 freenas autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs list -Hr -o name -t snapshot -d 1 western-backup/Media_library/Photo | tail -n 1 | cut -d@ -f2"
Aug 25 22:02:45 freenas autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "/sbin/zfs inherit freenas:state western-backup/Media_library/Photo@auto-20140823.2143-2w"
Aug 25 22:02:45 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state WD-RAIDZ2/Media_library/Photo@auto-20140823.2142-2w
Aug 25 22:02:46 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2142-2w
Aug 25 22:02:46 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST WD-RAIDZ2/Media_library/Photo@auto-20140823.2143-2w
Aug 25 22:02:46 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2143-2w
Aug 25 22:05:07 freenas autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs list -Hr -o name -t snapshot -d 1 western-backup/Media_library/Photo | tail -n 1 | cut -d@ -f2"
Aug 25 22:05:07 freenas autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "/sbin/zfs inherit freenas:state western-backup/Media_library/Photo@auto-20140823.2213-2w"
Aug 25 22:05:07 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state WD-RAIDZ2/Media_library/Photo@auto-20140823.2143-2w
Aug 25 22:05:08 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2143-2w
Aug 25 22:05:08 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST WD-RAIDZ2/Media_library/Photo@auto-20140823.2213-2w
Aug 25 22:05:08 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2213-2w
Aug 25 22:07:29 freenas autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs list -Hr -o name -t snapshot -d 1 western-backup/Media_library/Photo | tail -n 1 | cut -d@ -f2"
Aug 25 22:07:29 freenas autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "/sbin/zfs inherit freenas:state western-backup/Media_library/Photo@auto-20140823.2214-2w"
Aug 25 22:07:29 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state WD-RAIDZ2/Media_library/Photo@auto-20140823.2213-2w
Aug 25 22:07:30 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2213-2w
Aug 25 22:07:30 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST WD-RAIDZ2/Media_library/Photo@auto-20140823.2214-2w
Aug 25 22:07:30 freenas autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl WD-RAIDZ2/Media_library/Photo@auto-20140823.2214-2w



These snapshots are the ones that will most likely produce a 312-byte stream.
What is it about the GUI replication function that takes so much time compared to a zfs send -I ... | zfs recv ... command line?
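For reference, the manual incremental replication compared against here looks roughly like this, reusing the dataset and snapshot names from the log above (this is a sketch, not the exact command used):

```shell
# Send every snapshot between the last replicated one and the newest
# as one stream, instead of one ssh round-trip per snapshot as the
# GUI's per-snapshot polling appears to do.
zfs send -I WD-RAIDZ2/Media_library/Photo@auto-20140823.2112-2w \
            WD-RAIDZ2/Media_library/Photo@auto-20140823.2214-2w \
  | zfs recv -F western-backup/Media_library/Photo
```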
 

panz

Guru
Joined
May 24, 2013
Messages
556
I noticed that the developers chose to use dd to pipe the zfs send/recv routines. Maybe this is relevant.
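For what dd in a pipeline actually does: it just copies stdin to stdout in fixed-size blocks, acting as a buffer (small block sizes add per-block syscall overhead but don't change the data). A minimal illustration independent of ZFS, using a throwaway file in /tmp:

```shell
# Create 1 MiB of sample data standing in for a zfs send stream.
head -c 1048576 /dev/zero > /tmp/stream.bin

# Pipe it through dd, as the FreeNAS replication script reportedly does;
# dd passes the bytes through unchanged, block by block.
dd if=/tmp/stream.bin bs=64k 2>/dev/null > /tmp/received.bin

# The received copy is byte-identical to what was sent.
cmp /tmp/stream.bin /tmp/received.bin && echo "streams match"
```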
 