Are consumer grade NAS silent data killer DA BOMBA due to lack of bit rot protection?

Status
Not open for further replies.

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I didn't know about BTRFS. Personally, I think bit rot and its effects are being overstated. But with FreeNAS and scrubs it's a non-issue. I'm looking forward to the day I see one of my scrubs repair the pool. Then I'll know it fixed either bit rot or an unreadable sector.


 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I didn't know about BTRFS. Personally, I think bit rot and its effects are being overstated.

Not overstated. Not horribly common by some measures, but as the number of bits stored increases, and the length of time bits are stored increases, the statistical likelihood is that there'll be some bit flips or data loss. It can be a sector on disk that suddenly becomes unreadable. Conventional filesystems don't have the ability to detect and then pull from a redundancy resource. You *do* see some bit rot with enough time. If nothing else, the likelihood that a hard drive mechanism will fail after it reaches a certain age rockets up pretty quickly.
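The detection side of what jgreco describes comes down to ZFS checksumming every block and verifying on every read, which a conventional filesystem doesn't do. A toy sketch (not ZFS's actual on-disk format; ZFS uses fletcher4/SHA-256 checksums stored in parent block pointers, and the function names here are made up for illustration):

```python
import hashlib

def store(block: bytes) -> tuple[bytes, str]:
    # ZFS keeps a checksum for every data block in its parent metadata;
    # here we simply pair the data with its SHA-256 digest.
    return block, hashlib.sha256(block).hexdigest()

def read(block: bytes, checksum: str) -> bytes:
    # On every read, the checksum is recomputed and compared, so a bit
    # flipped "at rest" is caught even when the drive reports no error.
    if hashlib.sha256(block).hexdigest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return block

data, csum = store(b"important archive bits")
corrupted = bytes([data[0] ^ 0x01]) + data[1:]  # flip a single bit
read(data, csum)         # passes verification
# read(corrupted, csum)  # raises IOError; a plain filesystem would happily return the bad data
```

The point of the sketch: without the stored checksum, the corrupted read is indistinguishable from a good one, which is exactly why ext4/UFS can't notice bit rot.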
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I didn't know about BTRFS. Personally, I think bit rot and its effects are being overstated. But with FreeNAS and scrubs it's a non-issue. I'm looking forward to the day I see one of my scrubs repair the pool. Then I'll know it fixed either bit rot or an unreadable sector.

Twice I've had a scrub repair a single sector with no indication that there was anything otherwise wrong with the hard drive. There were no reallocated sectors, no offline uncorrectable sectors, no pending sectors, etc. Long SMART tests were OK. No read errors were reported to the OS. I can only assume these two incidents were bit rot / silent data corruption.

Would it have been a big deal if they weren't caught? Maybe, maybe not. A lot of my data is media, so a bad bit here or there isn't going to render the rest of the data useless. Some of my data is software archives, though, and a flipped bit there obviously would be an issue.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One thing with ZFS (and BTRFS): if they detect an error AND have redundancy of any type, they will automatically try to re-write the failed data. Thus they mostly auto-correct, which older file systems like UFS, EXT3/4, etc. don't do. Nor does Linux's MD-RAID or LVM (with mirroring).

In titan_rw's case of a scrub catching a single bad sector: if he had attempted to read the bad sector before the scrub found it, it would have been repaired automatically on that read.
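That read-time self-healing can be sketched in a few lines. This is a minimal illustration of the idea for a two-way mirror, assuming SHA-256 stands in for ZFS's block checksum; `mirrored_read` is a made-up name, not a ZFS API:

```python
import hashlib

def checksum(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def mirrored_read(copies: list[bytes], expected: str) -> bytes:
    # Find the first mirror copy whose checksum verifies.
    good = next(c for c in copies if checksum(c) == expected)
    # Self-heal: rewrite any copy that failed verification,
    # which is what ZFS does on a normal read or during a scrub.
    for i, c in enumerate(copies):
        if checksum(c) != expected:
            copies[i] = good
    return good

good = b"block contents"
mirror = [good, b"blocc contents"]  # second copy has silently rotted
result = mirrored_read(mirror, checksum(good))
# result is the intact data, and mirror[1] has been repaired in place
```

A non-integrated RAID layer can't do this: plain MD-RAID mirroring has two copies but no checksum, so it cannot tell which copy is the rotten one.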

Thus, I really prefer ZFS (and to a lesser extent BTRFS) over any other RAID, software or hardware.

There are other reasons to prefer ZFS software RAID over non-integrated RAID, some of which I have experienced on Solaris production servers. Specifically: a single bad block on disk 0, and then lots of bad blocks on disk 1. Because they were different blocks, disk 0's bad block(s) could still be recovered.
 