johnnychicago
Dabbler
- Joined
- Mar 3, 2016
- Messages
- 37
OK - so I've been using FreeNAS for a few weeks and really like it. I've come around to the idea that I want to cover my a* using snapshots, and that I will not need backups just to revert to old files.
Which is a good thing.
Backups are required only for the worst case - fire, theft, that kind of thing. Ideally my (off-site) backup NAS will connect regularly over the WAN, get replicated, and then go offline. I hope it never has to be read.
If it ever is, though, it should reliably come up with a complete copy of all my data. Losses in individual, well-defined files may be acceptable, but an error that would invalidate a major chunk of the data would be very bad.
So, while I am quite relaxed about factors like noise, performance, or power requirements (since the beast will only run infrequently), what should I be looking for in terms of drive configuration - without breaking the bank?
Can I get away with single drives? I understand ZFS will be able to find discrepancies by verifying checksums, obviously without being able to repair them on a single disk with no redundancy. If a monthly scrub were to find something, would the next replication run be able to fix that from the live NAS? I think I would not want to stripe single drives in this case, but having something like Seagate's 8TB archive drive in there seems like a reasonable-cost option for backup. If that's not big enough, a second pool on a second drive, and split my replication accordingly?
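For context, here's roughly the workflow I have in mind - a minimal sketch, with hypothetical pool/dataset names (`tank/data` on the live NAS, `backup` on the off-site box) and made-up snapshot dates:

```shell
# On the live NAS: snapshot, then send only the changes since the
# previous snapshot to the backup box over SSH.
zfs snapshot tank/data@2016-03-07
zfs send -i tank/data@2016-02-07 tank/data@2016-03-07 | \
    ssh backup-nas zfs receive backup/data

# On the backup NAS: a scrub reads every block and verifies checksums.
zpool scrub backup
zpool status -v backup   # reports any files with unrecoverable errors
```

My understanding (please correct me if wrong) is that an incremental receive only transfers blocks that changed on the source, so it would *not* automatically overwrite a block that silently rotted on the backup side - I'd presumably have to destroy the damaged dataset and re-receive it in full.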
Or maybe I should find a chassis that accepts tons of drives and build multiple mirrors from old drives?
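The multi-mirror idea would look something like this, I think - device names are hypothetical, four old drives as two mirror vdevs:

```shell
# Two mirrored pairs striped into one pool; each pair can
# self-heal checksum errors from its partner during a scrub.
zpool create backup mirror ada1 ada2 mirror ada3 ada4
```

That would give the backup pool its own redundancy, at the cost of half the raw capacity.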
Any best practices on this? What's my realistic chance of recovery if the backup goes 'partly' bad? And what would 'partly' even mean in this setting?