Snapshot replication size

Status
Not open for further replies.

dpearcefl

Contributor
Joined
Aug 4, 2015
Messages
145
If I run the following command I get this result:
Code:
% ps aux | grep "zfs:"
root          4275    8.2  0.0  48608   3728  -  S    10:58AM     2:30.52 zfs: sending tank1/Storage1/VMLinks@auto-20160616.1400-100y (54%: 179632353040/329792907320) (zfs)


Then if I run this command I get this result:
Code:
% zfs get used | grep "VMLinks@auto-20160616.1400-100y"
tank1/Storage1/VMLinks@auto-20160616.1400-100y                              used      183K   -


My replication tasks are taking forever to run over a 1 Gb connection.

Do the numbers in the "ps aux" line show the blocks remaining to be transferred?

The receiving server is getting 500 Mb/sec inbound from this server.

How can I tell how big a snapshot is?

Thanks for helping me understand replication.
 

dpearcefl

Contributor
Joined
Aug 4, 2015
Messages
145
Why am I interested in this information?

My two FreeNAS boxes are going to soon be separated by 1500 miles, connected by a 100 Mb/sec connection. I'm trying to determine how big the snapshots are when they travel over the internet so I can determine if the pipe is going to be big enough.
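As a back-of-envelope check on whether the pipe is big enough, you can divide the stream size by the link rate. This is only a sketch: the 50 GiB incremental size and the 80% link-efficiency factor below are assumptions for illustration, not measurements.

```shell
# Rough transfer-time estimate: bytes * 8 / effective bits-per-second.
awk 'BEGIN {
  bytes    = 50 * 1024^3     # assume a 50 GiB incremental stream
  link_bps = 100e6 * 0.8     # 100 Mb/sec link at ~80% efficiency (assumption)
  printf "%.1f hours\n", bytes * 8 / link_bps / 3600
}'
```

At those assumed numbers a 50 GiB incremental would take roughly an hour and a half, before accounting for latency.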
 

dpearcefl

Contributor
Joined
Aug 4, 2015
Messages
145
"zfs get used | grep VMLinks@auto-20160616.1400-100y" seems to return a very small size compared to the size of the snapshot itself. Perhaps it is just metadata for the snapshot?

Now if I run this command, I get the actual size of all of the files that comprise the snapshot. So it's a "full", not an incremental. (commas added by me)
Code:
# zfs send -nPv tank1/Storage1/VMLinks@auto-20160616.1400-100y
full    tank1/Storage1/VMLinks@auto-20160616.1400-100y  11,772,585,294,416
size    11,772,585,294,416


This size is before any compression that may be applied during the "zfs send".

So it would seem that when a snapshot is replicated to another dataset, a "full" snapshot of the sent dataset is used. So if you have a 1 TB dataset that you snapshot 4 times a day, ignoring compression you will be sending 4 TB across the wire per day. Is this correct?
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
The first replication is full, all successive ones are incremental. That particular snapshot you looked at is just small :)
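You can confirm how small an incremental is with the same dry-run trick used above, this time with -i between two adjacent snapshots. A sketch: the 0800 snapshot name follows the thread's naming scheme but is hypothetical, and the byte counts in the sample output are made up for illustration.

```shell
# Dry-run (-n) an incremental (-i) send with parsable (-P) verbose (-v)
# output; nothing is transferred, zfs just reports the stream size:
#
#   zfs send -nPv -i tank1/Storage1/VMLinks@auto-20160616.0800-100y \
#                    tank1/Storage1/VMLinks@auto-20160616.1400-100y
#
# Illustrative output -- the "size" line is what would cross the wire:
out='incremental  auto-20160616.0800-100y  tank1/Storage1/VMLinks@auto-20160616.1400-100y  209715200
size    209715200'
bytes=$(printf '%s\n' "$out" | awk '/^size/ {print $2}')
echo "$((bytes / 1024 / 1024)) MiB to send"
```

So the per-snapshot cost is driven by how much data changed, not by the dataset's total size.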
 

Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
Why am I interested in this information?

My two FreeNAS boxes are going to soon be separated by 1500 miles, connected by a 100 Mb/sec connection. I'm trying to determine how big the snapshots are when they travel over the internet so I can determine if the pipe is going to be big enough.

Not to make you sad, but "is the pipe big enough" is not the question you need to worry about. I am an engineer who focuses on virtualization, etc. for a cloud hosting provider. I have had, time and again, clients 1,000 - 3,000 miles away boast about 250 Mbps, 300 Mbps, 500+ Mbps connections. When we peer up our devices and I start to migrate their data, they can usually achieve only a fraction of their advertised throughput. Why? Latency. If you're sending via TCP, you need a transmit and an acknowledgement. Ping the two locations to find the round-trip latency, then look up a latency/bandwidth calculator on the web and you'll know what throughput to actually expect.
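The ceiling described above is easy to estimate: a single TCP stream cannot move more than one window per round trip. A sketch, assuming a classic 64 KiB window (no window scaling) and a 40 ms cross-country RTT; both figures are assumptions, so substitute your own ping time.

```shell
# Single-stream TCP ceiling: throughput <= window size / round-trip time.
awk 'BEGIN {
  window_bytes = 64 * 1024   # assume a 64 KiB receive window (no scaling)
  rtt_s        = 0.040       # assume 40 ms round-trip time
  printf "%.1f Mbps max per TCP stream\n", window_bytes * 8 / rtt_s / 1e6
}'
```

On those assumptions a "1 Gb" link still tops out around 13 Mbps per stream, which is why the RTT matters more than the advertised pipe.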
 

dpearcefl

Contributor
Joined
Aug 4, 2015
Messages
145
Yes, you are completely right. Any calculations without latency figured in will be grossly wrong.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Why am I interested in this information?

My two FreeNAS boxes are going to soon be separated by 1500 miles, connected by a 100 Mb/sec connection. I'm trying to determine how big the snapshots are when they travel over the internet so I can determine if the pipe is going to be big enough.
Snapshot size is a function of how much data has changed since the last snapshot.
This can help view the sizes of the snapshots: http://docs.oracle.com/cd/E19253-01/819-5461/gbiqe/
And as was mentioned, FreeNAS uses incrementals after the initial full, and latency is an issue. You can investigate znapzend (someone on the forums recently installed it and it seems to be working well) and the use of mbuffer.
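For reference, a send pipeline with mbuffer typically looks like the sketch below. The hostname, target pool, and snapshot names are placeholders, and the script only prints the command it would run rather than executing it.

```shell
# Sketch of a replication pipeline with mbuffer smoothing the stream over
# a high-latency link. Hostname, target pool, and snapshot names are
# placeholders; this prints the command for illustration only.
send_cmd='zfs send -i tank1/Storage1/VMLinks@prev tank1/Storage1/VMLinks@cur'
recv_cmd='zfs receive -F tank2/Storage1/VMLinks'
# mbuffer -s sets the block size, -m the buffer size; a large buffer keeps
# the sender streaming while TCP acknowledgements trickle back.
pipeline="$send_cmd | mbuffer -s 128k -m 1G | ssh remote-nas \"$recv_cmd\""
printf '%s\n' "$pipeline"
```

The buffer decouples the disk read rate from the bursty WAN transfer, which helps a lot on long-RTT links.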
 