I'm not sure if it's still the case, but for a while there was a bug where multiple snapshot schedules could cause problems with replication. That would mean the only use case for a weekly snapshot schedule would be where your data changed so infrequently that rolling back a week is acceptable. Honestly though, it's probably just not a use case that has been given much thought.
So, to use my script you'd run a command something like:
Code:
rollup.py -r -i hourly,daily,weekly,monthly,yearly:0 tank
The "-r" means look at all child datasets, not just the top level. If you have multiple datasets and want different rules for each, drop the "-r" and issue the command once per dataset (you can list several datasets at a time if they share the same interval set). The "-i hourly,..." defines the interval rules to consider. Normally, the number following the colon is how many snapshots to keep for that interval, but "0" (as in "yearly:0") is special: the script will keep as many as accumulate. Looking at the script, I just realized the "0" behavior isn't documented; I should fix that.
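To make the spec format concrete, here's a rough Python sketch of how an interval argument like that could be parsed. This is a hypothetical illustration, not rollup.py's actual parser, and since the post doesn't say what a bare interval name (no colon) defaults to, the sketch leaves that as None:

```python
# Hypothetical sketch -- NOT rollup.py's actual parser.
def parse_intervals(spec):
    """Parse a "-i" spec like "hourly,daily,weekly,monthly,yearly:0".

    The number after the colon is how many snapshots to keep for that
    interval; 0 means keep everything.  What a name without a colon
    defaults to isn't stated in the post, so it's left as None here.
    """
    rules = {}
    for part in spec.split(","):
        name, _, count = part.partition(":")
        rules[name] = int(count) if count else None
    return rules

print(parse_intervals("hourly,daily,weekly,monthly,yearly:0"))
```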
Anyway, run this command every hour alongside a single snapshot task set to once an hour, with the task's expiration set to however long you want the yearly snapshots to last (in your case, as high as the expiration will go), and you should end up with the hourly, daily, weekly, monthly, and yearly intervals you're looking for.
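In case the rollup idea isn't clear, here's a rough Python sketch of the principle: from a single hourly snapshot task, one representative snapshot survives in each hour/day/week/month/year bucket, and the rest become candidates for destruction. This is only an illustration, not the actual rollup.py logic, and keeping the *earliest* snapshot per bucket is my assumption:

```python
from datetime import datetime

# Illustration only -- NOT the actual rollup.py implementation.
# Each interval name maps to a function that buckets a timestamp;
# snapshots sharing a bucket are redundant for that interval.
BUCKETS = {
    "hourly":  lambda t: (t.year, t.month, t.day, t.hour),
    "daily":   lambda t: (t.year, t.month, t.day),
    "weekly":  lambda t: tuple(t.isocalendar())[:2],  # (ISO year, ISO week)
    "monthly": lambda t: (t.year, t.month),
    "yearly":  lambda t: (t.year,),
}

def rollup(snapshot_times, interval):
    """Keep the earliest snapshot in each bucket (an assumption here);
    everything else is a candidate for "zfs destroy"."""
    bucket = BUCKETS[interval]
    seen, keep = set(), []
    for t in sorted(snapshot_times):
        b = bucket(t)
        if b not in seen:
            seen.add(b)
            keep.append(t)
    return keep
```

A snapshot only has to satisfy one interval to survive, so presumably the keep sets are unioned across every interval you list before anything is destroyed.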
Until you feel comfortable with the script running unattended, you can run the command by hand with the "-t" flag to see which snapshots would be deleted (the "zfs destroy <snapshot>" commands won't actually be run) and "-v" to see which snapshots are being kept and why (you'll see columns indicating which intervals each snapshot satisfies).
I've been running the script in its present form for around a year and haven't suffered any data loss. I've had feedback from other people who are using replication, and it sounds like it's been working well for them. The only issue I see is that sometimes, when my machine is under very high load, I get emails indicating the script couldn't find any snapshots to destroy. I assume this is a timeout in one of the zfs commands. I should add the ability to silence those errors, since the script will catch any extra snapshots during the next iteration.