Or set up multiple snapshot tasks: one every 5 minutes whose snapshots last an hour, an hourly one whose snapshots last a day, a daily one that lasts a week, a weekly one that lasts a month, and so on. You don't have to do it all with one snapshot task.
This is another thing you can achieve with a script in the repo that I linked to.
I have a single recursive snapshot task that runs every five minutes, and then I run two of my snapshot scripts every hour. The first (clearempty) removes all snapshots that are taking up zero space. The second (rollup) prunes snapshots: I keep everything from the past two hours, then one snapshot per hour for the last day, then one per day for the last week, and beyond that I forget the specifics. It's either one per week for the last month and then one per month for the last five years, or it's one per week for the last five years. I don't think I actually have anything going back that far, but those are the limits I have set.
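The rollup idea can be sketched in a few lines of Python. This is not the actual script from the repo, just an illustration of the tiered-bucket approach under the retention limits described above (the real script's tie-breaking and exact tiers may differ):

```python
from datetime import datetime, timedelta

def rollup(snapshots, now):
    """Return the subset of snapshot timestamps to KEEP under a tiered
    retention policy:
      - everything from the past 2 hours
      - one per hour for the past day
      - one per day for the past week
      - one per week for the past month
      - one per month out to five years
    This sketch keeps the newest snapshot in each bucket.
    """
    keep = set()
    seen = set()  # buckets that already have a keeper
    for ts in sorted(snapshots, reverse=True):  # newest first
        age = now - ts
        if age <= timedelta(hours=2):
            keep.add(ts)                        # keep everything recent
            continue
        if age <= timedelta(days=1):
            bucket = ("hour", ts.strftime("%Y-%m-%d %H"))
        elif age <= timedelta(days=7):
            bucket = ("day", ts.strftime("%Y-%m-%d"))
        elif age <= timedelta(days=30):
            bucket = ("week", ts.strftime("%G-%V"))  # ISO year-week
        elif age <= timedelta(days=5 * 365):
            bucket = ("month", ts.strftime("%Y-%m"))
        else:
            continue                            # older than 5 years: drop
        if bucket not in seen:
            seen.add(bucket)
            keep.add(ts)
    return keep
```

Anything not in the returned set would be handed to `zfs destroy`. The point of the buckets is that granularity decays with age, so the snapshot count stays roughly constant no matter how long the system runs.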
This strikes a good balance between being able to recover data and not having an unmanageable number of snapshots to look through. And if I ever delete a large amount of data whose space I want to reclaim, I can run the snap-strip script to drop every snapshot for that dataset.
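The snap-strip step is conceptually simple: find every snapshot belonging to one dataset and destroy it. A dry-run sketch (again, not the actual repo script) that works from the output of `zfs list -H -t snapshot -o name` and returns the commands instead of executing them:

```python
def snap_strip_commands(dataset, zfs_list_output):
    """Given the output of `zfs list -H -t snapshot -o name`, build the
    `zfs destroy` commands that would drop every snapshot of `dataset`.
    Dry-run only: the commands are returned as strings, not executed.
    """
    cmds = []
    for line in zfs_list_output.splitlines():
        name = line.strip()
        # exact-dataset match: "tank/data@..." but not "tank/data2@..."
        if name.startswith(dataset + "@"):
            cmds.append(f"zfs destroy {name}")
    return cmds
```

Returning command strings rather than calling `zfs destroy` directly makes it easy to review what would be lost before committing, which matters for an operation this destructive.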