Time Machine backups for several computers, individual datasets recommended?


-fun-

Contributor
Joined
Oct 27, 2015
Messages
171
Hi out there!

I'm currently trying to find out how best to set up my FreeNAS server for Time Machine backups for several Macs on the network (three of them at the moment, but there may be more later).

My first thought was to create one shared dataset for all Macs. As far as I know, Time Machine grows its sparsebundle as far as required and possible, and then deletes the oldest backups to make room for newer ones, so there is already a mechanism for balancing storage usage.

My question: would you recommend using one dataset for all Macs, or is there a good reason to create a separate dataset for each machine on the network?

Another question: does it make any sense to use compression on the dataset?

Thank you!

-fun-

P.S.: In case this is of any relevance to the question: I'm running FreeNAS-9.3-STABLE-201512121950 on an HP ProLiant Gen8 MicroServer, currently with only one disk and no UPS. I'm testing / evaluating. :)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Would you recommend using one dataset for all Macs, or is there a good reason to create a separate dataset for each machine on the network?
The main reason to use one dataset per Mac is so you can revert to an earlier snapshot for an individual machine when its backup fails. A secondary reason would be if you want to give each machine a different storage quota.
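In case it helps, here is a minimal sketch of that layout from the shell (the pool name tank, the machine names and the quota sizes are just placeholders; the FreeNAS GUI can do the same thing):

    # one parent dataset, plus one child dataset per Mac with its own quota
    zfs create tank/timemachine
    zfs create -o quota=500G tank/timemachine/macbook1
    zfs create -o quota=300G tank/timemachine/macbook2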
 

-fun-

Contributor
Joined
Oct 27, 2015
Messages
171
Thank you! I hadn't thought of using snapshots for this. Is this really a good idea when there is only one very large disk image per Mac? The question is probably whether large portions of the file get rewritten even when only a small change is made inside it. In the worst case, the complete image would have to be duplicated for a snapshot, right?

What effect on storage consumption do you see with such a setup?
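I suppose I could also measure it myself once a few snapshots have accumulated, with something like this (the dataset name is just an example):

    zfs list -t snapshot -r tank/timemachine -o name,used,refer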
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I don't use Time Machine over AFP, because I found it to be unreliable, with periodic failures that require throwing out the whole backup and starting over. Those who do use it find they can mitigate this by reverting to an earlier snapshot when something goes wrong.
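The revert itself is a one-liner from the shell, roughly like this (dataset and snapshot names are placeholders; -r is only needed, and destroys newer snapshots, if any exist after the one you roll back to):

    zfs rollback -r tank/timemachine/macbook1@auto-20151210.0000-2w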
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
I'm backing up 3 Macs using Time Machine over AFP to my FreeNAS box without any issues.

Each has its own dataset so I can control the size and I take daily snapshots to replicate to a second FreeNAS box.
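For anyone who prefers to see what those tasks boil down to, the manual equivalent is roughly this (pool, dataset and host names are made up; the periodic snapshot and replication tasks in the GUI handle it automatically):

    # daily snapshot of one Mac's dataset
    zfs snapshot tank/timemachine/macbook1@daily-2015-12-20
    # incremental send to the second box, based on the previous day's snapshot
    zfs send -i @daily-2015-12-19 tank/timemachine/macbook1@daily-2015-12-20 | ssh backupbox zfs receive backup/timemachine/macbook1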
 

-fun-

Contributor
Joined
Oct 27, 2015
Messages
171
That's encouraging. I have had trouble with Time Machine several times, forcing me to start a new backup. I experienced this in combination with changes to the networking setup on my machine (for example switching interfaces from WLAN to Ethernet and vice versa), which seems to trouble Time Machine, but I never bothered to investigate it in detail. Although I don't appreciate having to start Time Machine backups over (it takes a lot of time), I don't consider it a real problem, since I have other backups anyway. I have only ever regarded Time Machine as a convenient way to get back to a previous version of a file or email, and I rarely use it.

Two other Macs here are running Time Machine without ever having had a problem.

I will set up separate datasets then.

I let my Mac create a complete backup last night; the compression ratio was 1.05. I guess that's hardly worth enabling compression for. Do any of you use compression on datasets dedicated to Time Machine backups?
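In case anyone wants to compare numbers, the ratio can be read straight off the dataset (the dataset name is just an example):

    zfs get compression,compressratio tank/timemachine/macbook1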
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Usually, any compression ratio over 1.00 is worth it. You're trading CPU resources for space, and you usually have more than enough CPU resources, so the space gain is basically free :)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Do any of you use compression on datasets dedicated to Time Machine backups?
I leave compression enabled on all datasets. The default, LZ4, is smart enough not to waste space on incompressible data, and so fast on modern CPUs that it will never hurt performance.
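If a dataset was created with compression off, it can be switched on at any time; only data written after the change gets compressed (the dataset name is an example):

    zfs set compression=lz4 tank/timemachine/macbook1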
 