> There's no good way to recover from the latter case.

Restore from backup? :p
> Restore from backup? :p

I said good.
Your day is already not a good one if you have to reach for the backups.
> Your day is already not a good one if you have to reach for the backups.

Reminds me of this. "If it starts pointing toward space you are having a bad problem and you will not go to space today."
And, as has already been said, nothing is stopping you from continuing to use SyncToy, if that is what you like.
When you use a NAS like this, you have to realize that if a file gets corrupted (or cryptolocked) on your workstation, the next synchronization may overwrite a good file on the NAS with the corrupted file from the local computer.
One way to soften the blow is to configure automatic snapshots on the FreeNAS, and keep those snapshots for a long time [forever, maybe?]. If you notice a corrupted file, and you still have snapshots dating back to when the file was good, you can restore the file from a snapshot.
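For anyone unfamiliar with the mechanics, a rough sketch of what that looks like from the shell (dataset name "tank/data" and the file paths are just examples; in the FreeNAS GUI you'd set this up under Periodic Snapshot Tasks instead):

```shell
# Take a snapshot of the dataset (FreeNAS can also schedule these):
zfs snapshot tank/data@2014-03-06

# List the snapshots available for that dataset:
zfs list -t snapshot -r tank/data

# Restore a single good copy of a corrupted file from the hidden
# .zfs/snapshot directory (read-only view of the dataset at snapshot time):
cp /mnt/tank/data/.zfs/snapshot/2014-03-06/photos/img001.jpg \
   /mnt/tank/data/photos/img001.jpg
```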
Just to add after reading through the thread...
- WD RED NAS drives would be a better choice than the HGST drives for your application. The REDs require less power, and run quieter and cooler than the HGST.
- Also, either add a second SSD drive OR use 2x thumb drives like the Cruzer FIT 32G and mirror them!
> Are snapshots 1:1 in size?

A snapshot only takes up as much space as the changes that have been made to the filesystem since the snapshot occurred. I'm going to politely suggest that you do some reading up on ZFS.
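You can see this for yourself: the USED column below shows only the space unique to each snapshot, not a full copy of the dataset ("tank/data" is a placeholder dataset name):

```shell
# USED = blocks changed since the snapshot; REFER = total data the
# snapshot references. A fresh snapshot typically shows USED near 0.
zfs list -t snapshot -o name,used,referenced -r tank/data
```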
I like how open BackBlaze is as a company, and the way they share so much data and their hardware designs and even some of their source code. I appreciate that they liberally sprinkle their blog posts about drive failure rates with "in our environment" and similar disclaimers. But I dislike their use of SEO-gimmick titles like "What is the Best Hard Drive" and slugs like "best-hard-drive".
If you look behind the BackBlaze data, and the oft-quoted Google study of hard drive failures, you'll find the same circular definition of hard drive failure: "A hard drive is considered 'failed' because it was replaced."
Why did you replace that hard drive?
Because it failed.
How do you know it failed?
Because we replaced it.
etc.
> Are there any documents you can recommend for my purposes?

I would start with Wikipedia and go from there.
> I was disappointed that there was no readily available explanation on BackBlaze's website of what exactly constituted a drive failure.

Quoting directly from an earlier blog post: "A failure is when we have to replace a drive in a pod."
Just leave the default compression enabled on the whole volume ;)
And go with 4TB drives. I think they came down in price sufficiently. There is no such thing as too much storage, and with ZFS you do want to overprovision.
With a triple-mirror you will have 3TB before any overhead. You wanted 2.5TB of useful space, so it is cutting close.
Pictures and videos in already compressed formats do not additionally compress well.
Also, I am not sure about the power consumption of those 7200 RPM drives. There are other drives, such as WD Red and Seagate NAS, which spin a little slower and should consume less electricity.
> Can you even do triple mirrors in FreeNAS? Can three vdevs be mirrored to cross-reference 2 disks?

https://forums.freenas.org/index.ph...mirror-to-3-mirror-on-encrypted-volume.22562/
> And I understand now what Bidule0hm meant by 'original compression.' I'm not worried about saving space or anything, I just wonder if sticking the files in an archive would potentially preserve them better?

Sounds like a workflow nightmare to me. Instead of compressing, you may want to look into tools that let you checksum your files and keep that information for checking the checksums later...
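If you do go that route, you don't need anything fancier than a checksum manifest. A minimal sketch with GNU coreutils' sha256sum (on FreeBSD/FreeNAS itself the equivalent tool is sha256; the paths here are examples):

```shell
# Record a checksum for every file under the photo tree:
cd /mnt/tank/photos
find . -type f -print0 | xargs -0 sha256sum > ~/photos.sha256

# Later, verify every file against the stored manifest;
# any bit rot or accidental change is reported as FAILED:
sha256sum -c ~/photos.sha256
```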
Why is *anyone* talking about compressing the files into .zips or storing checksums? ZFS not only checksums the blocks the files are stored in, but it can compress the blocks too. So why would anyone want to do anything manually? Just set your compression to whatever you want and copy the files over as-is!
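And the integrity-checking half is built in too: a scrub reads every block in the pool, verifies it against the stored checksums, and repairs from the mirror copy where possible ("tank" is a placeholder pool name; FreeNAS schedules these automatically by default):

```shell
# Verify every block in the pool against its checksum:
zpool scrub tank

# Watch progress / results (look for the "repaired" and "errors" lines):
zpool status tank
```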