hungarianhc · Patron · Joined: Mar 11, 2014 · Messages: 234
Hi There,
I have a rock-solid FreeNAS system at home with a C2750 motherboard, 32GB of RAM, and a RAID-Z2 pool. At my parents' house a few hundred miles away, I have a lower-cost setup for them: a brand-new 6TB drive (no redundancy), 16GB of non-ECC RAM, and an older Core 2 Quad CPU.
I use my server for a lot of things, but I also replicate to theirs, currently using Syncthing. That way, when I rip a Blu-ray, it gets backed up to an off-site location (a benefit to me), and it also gives them a Plex server (which they love).
Here's the issue, though. I VPN into their network every so often, and this time I noticed a ZFS error on the pool. It's an error on a directory, and as you can see from the dataset it lives in, it's a pretty crucial root folder.
  pool: Tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 9h16m with 1 errors on Sat Oct 31 09:16:07 2015
config:

        NAME                                          STATE     READ WRITE CKSUM
        Tank                                          ONLINE       0     0     0
        gptid/4146ba9e-6239-11e5-a501-d050991929b1    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        Tank/Media:&lt;0x36d7&gt;
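For what it's worth, my understanding is that the `<0x36d7>` form means ZFS can no longer map the damaged object back to a path name, which often happens when the affected file or directory has since been deleted or overwritten. Assuming the pool name from the output above, these are the standard commands I've been using to re-check:

```shell
# Re-read every block in the pool and verify checksums
# (assumes the pool is named "Tank", per the output above).
zpool scrub Tank

# Show scrub progress and whether the permanent error is still listed.
zpool status -v Tank

# Reset the per-device error counters; if the damaged object has since
# been deleted, a subsequent clean scrub should drop the stale entry.
zpool clear Tank
```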
Okay... so it's possible that the drive is failing, but considering it's about a six-week-old Red drive, I don't THINK that's the issue; I can't rule it out, though. It's also possible that the non-ECC RAM in their setup has something to do with it. In terms of solutions, the 'easiest' one is to redo what I did to set it up in the first place, but that's not something I can do any time soon: get the drive back from them next time I visit, bring it to my place, wipe it, hook it up to my NAS, rsync the media folder over again, and then, on my second visit, put the drive back in and reconfigure their NAS.
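If it comes to the wipe-and-resync route, the copy step itself is just a one-liner (the mount paths here are hypothetical; adjust to whatever the datasets are actually mounted as):

```shell
# Full re-copy of the media dataset onto the freshly wiped drive.
# -a preserves permissions/ownership/times, -H preserves hard links,
# --delete removes anything on the target not present on the source.
rsync -aH --delete --progress /mnt/Tank/Media/ /mnt/ParentsTank/Media/
```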
Then I thought about ZFS replication... would that work? If I switched from Syncthing to ZFS replication, would the next replication from my server (where the file has no permanent error) fix the corrupted metadata on their end? Any other suggestions for fixing this remotely without completely wiping out the directory, which I suppose I COULD do? Maybe I'll eventually upgrade their setup to something a bit more robust like mine, with ECC and so on, but for now I'll probably take the lower-cost route. Thanks in advance!
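For reference, the ZFS replication I'm considering would look roughly like this, a sketch assuming the dataset name from the zpool output above and a hypothetical SSH host for their box:

```shell
# Initial full replication: snapshot the source dataset and stream it
# over SSH. -F lets the receive replace the existing dataset contents
# on the destination (which is the point, given the corruption there).
zfs snapshot Tank/Media@base
zfs send Tank/Media@base | ssh parents-nas zfs receive -F Tank/Media

# Subsequent runs are incremental: only the blocks that changed between
# the two snapshots cross the wire.
zfs snapshot Tank/Media@next
zfs send -i @base Tank/Media@next | ssh parents-nas zfs receive Tank/Media
```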