RSYNC target ending up with too much data?


jrodder

Dabbler
Joined
Nov 10, 2011
Messages
28
I'm kinda stumped. I have two FreeNAS servers, both RAIDZ1 with 4x2TB disks, and I'm trying to rsync between the two to migrate to new hardware. I had to stop the process because it was obvious something was wrong; I just don't know what. I didn't have the 'delete' option enabled, but I certainly haven't added 700GB of data that's unaccounted for.
Can anyone point me toward what to look for? Could snapshots existing on the pushing server cause something like this?
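For reference, a dry run along these lines (paths and hostname are just placeholders) should at least itemize what rsync thinks still differs between the two sides:

Code:
# -n is a dry run, so nothing is transferred; --itemize-changes shows,
# per file, why rsync would send it (size, timestamp, new file, etc.)
rsync -avn --itemize-changes /mnt/tank/dataset/ root@newnas:/mnt/tank/dataset/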

*edit*
Now that I've looked at the zfs output, it looks like the two might be closer than the GUI suggested. I realize I do need to free up some space so as not to be over that 80% threshold. However, I'm not exactly sure what is happening with the snapshot data in this case, or why it's so effing large. I only keep snapshots for two weeks, and I certainly haven't added 400GB of data in the last two weeks.
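For anyone following along, a breakdown like this (the dataset name is a placeholder) is what shows how much of the usage is live data versus space held only by snapshots:

Code:
# USEDDS is the live data in the dataset itself;
# USEDSNAP is space that would be freed if all snapshots were destroyed
zfs list -o space tank/dataset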

Ah well, if anyone can provide guidance/input I'd greatly appreciate it.

[Attached: screenshots of the GUI and zfs output, 2015-04-05]
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
The amount of space a snapshot consumes is related to how much has changed on the dataset since the snapshot was taken. So one way you could end up with 406GB occupied by snapshots is to have deleted 406GB of data since they were taken. This is just an example from someone who isn't exactly an expert...
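To see which snapshots are actually holding the space, a listing along these lines (dataset name is a placeholder) shows each one with its own usage, oldest first:

Code:
# 'used' is the space unique to each snapshot; blocks shared between
# snapshots only show up in the dataset's overall usedbysnapshots total
zfs list -t snapshot -r -o name,used,refer,creation -s creation tank/dataset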
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Another thing that bit me with rsync is the "--sparse" option. Some files in Unix (I don't know about Windows) are sparse. Perhaps you copied sparse files and filled in the holes with allocated space on the destination.

Here is an example of a sparse file:

Code:
/var/log> ls -l lastlog
-rw-r--r-- 1 root root 292292 Apr  4 10:08 lastlog
/var/log> du -sk lastlog
16   lastlog
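If that is what's going on, something like this (source and destination are just examples) tells rsync to try to recreate the holes on the target instead of writing them out as zero-filled blocks:

Code:
# -S is --sparse; without it, the holes in a sparse file
# get written to the destination as real allocated zeros
rsync -avS /var/log/ root@newnas:/mnt/tank/logs/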
 

j_r0dd

Contributor
Joined
Jan 26, 2015
Messages
134
I use rsnapshot to back up two laptops to an NFS share. It uses rsync and makes hourly, daily, weekly and monthly backups, using hard links for the files that haven't changed. When I was migrating from my Linux NAS to FreeNAS, I was eating up a ton of space rsync'ing the directory that held all these backups. I soon realized that rsync was re-creating each file instead of reproducing the hard links. Looking at the man page, I discovered the -H flag, which preserves hard links. Not sure if that applies to your situation, but it saved me a lot of time and space.
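Something along these lines is what did it for me (the paths are just examples); -H is the important part:

Code:
# -a preserves permissions/ownership/times; -H preserves hard links so the
# rotated backups aren't expanded into separate full copies on the destination
rsync -aH --progress /mnt/backups/rsnapshot/ root@newnas:/mnt/tank/backups/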
 

jrodder

Dabbler
Joined
Nov 10, 2011
Messages
28
I appreciate the replies thus far. I have to go grab a decent gigabit switch, and I have deleted all the snapshots and some other data that I really didn't need, which got it down to reporting 3.5TB. There's a syslog folder that has 110GB of logs; I think that's a bit excessive and might have to look at that. I destroyed the dataset on the target FreeNAS server. anodos, maybe I will look at that; I've just always used rsync in other instances and environments. I am going to give it another run this week and post back. I just had no idea where all the 'extra' data was coming from; I couldn't see how a pool reporting 4.3TB could end up filling a 5.3TB array on the other end, you know?
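To track down oversized directories like that syslog folder, a summary like this (the path is a placeholder) lists the first-level directory sizes, largest last:

Code:
# -d 1 limits the summary to one level deep; sort -h orders human-readable sizes
du -d 1 -h /mnt/tank/dataset | sort -h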
 