10.7TiB ZFS RAIDZ2: Is this normal behavior? Can I trust FreeNAS with my data?

frr792 (Cadet, joined Feb 8, 2012, 4 messages)
Hey, I'm a Windows guy making an effort to explore the other side. I built this system to test FreeNAS and determine whether I can trust it to safely store and back up my data.

You can see my hardware/FreeNAS specs in my signature. I've been getting a lot of log traffic that I don't know what to make of.

Can someone interpret these messages and help me understand their severity? Much obliged.

DMESG
Code:
da0 at mpt0 bus 0 scbus0 target 0 lun 0
da0: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da8 at umass-sim0 bus 0 scbus5 target 0 lun 0
da8: <SanDisk Cruzer 1.00> Removable Direct Access SCSI-2 device
da8: 40.000MB/s transfers
da8: 3827MB (7837696 512 byte sectors: 255H 63S/T 487C)
da1 at mpt0 bus 0 scbus0 target 1 lun 0
da1: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da1: 300.000MB/s transfers
da1: Command Queueing enabled
da1: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da2 at mpt0 bus 0 scbus0 target 2 lun 0
da2: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da2: 300.000MB/s transfers
da2: Command Queueing enabled
da2: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da3 at mpt0 bus 0 scbus0 target 3 lun 0
da3: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da3: 300.000MB/s transfers
da3: Command Queueing enabled
da3: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da4 at mpt0 bus 0 scbus0 target 4 lun 0
da4: <ATA WDC WD20EADS-00R 0A01> Fixed Direct Access SCSI-5 device
da4: 300.000MB/s transfers
da4: Command Queueing enabled
da4: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da5 at mpt0 bus 0 scbus0 target 5 lun 0
da5: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da5: 300.000MB/s transfers
da5: Command Queueing enabled
da5: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da6 at mpt0 bus 0 scbus0 target 6 lun 0
da6: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da6: 300.000MB/s transfers
da6: Command Queueing enabled
da6: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da7 at mpt0 bus 0 scbus0 target 7 lun 0
da7: <ATA WDC WD20EADS-00S 0A01> Fixed Direct Access SCSI-5 device
da7: 300.000MB/s transfers
da7: Command Queueing enabled
da7: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
SMP: AP CPU #1 Launched!
GEOM: da8s1: geometry does not match label (16h,63s != 255h,63s).
Trying to mount root from ufs:/dev/ufs/FreeNASs1a
ZFS filesystem version 4
ZFS storage pool version 15
(da0:mpt0:0:0:0): READ(10). CDB: 28 0 2a c4 c5 e 0 0 2b 0
(da0:mpt0:0:0:0): CAM status: SCSI Status Error
(da0:mpt0:0:0:0): SCSI status: Check Condition
(da0:mpt0:0:0:0): SCSI sense: MEDIUM ERROR info:2ac4c50f asc:11,0 (Unrecovered read error)
(da0:mpt0:0:0:0): READ(10). CDB: 28 0 2a c4 c5 e 0 0 2b 0
(da0:mpt0:0:0:0): CAM status: SCSI Status Error
(da0:mpt0:0:0:0): SCSI status: Check Condition
(da0:mpt0:0:0:0): SCSI sense: MEDIUM ERROR info:2ac4c512 asc:11,0 (Unrecovered read error)
(da0:mpt0:0:0:0): READ(10). CDB: 28 0 2a c4 c5 e 0 0 2b 0
(da0:mpt0:0:0:0): CAM status: SCSI Status Error
(da0:mpt0:0:0:0): SCSI status: Check Condition
(da0:mpt0:0:0:0): SCSI sense: MEDIUM ERROR info:2ac4c50f asc:11,0 (Unrecovered read error)
(da0:mpt0:0:0:0): READ(10). CDB: 28 0 2a c4 c5 e 0 0 2b 0
(da0:mpt0:0:0:0): CAM status: SCSI Status Error
(da0:mpt0:0:0:0): SCSI status: Check Condition
(da0:mpt0:0:0:0): SCSI sense: MEDIUM ERROR info:2ac4c512 asc:11,0 (Unrecovered read error)
(da0:mpt0:0:0:0): READ(10). CDB: 28 0 2a c4 c5 e 0 0 2b 0
(da0:mpt0:0:0:0): CAM status: SCSI Status Error
(da0:mpt0:0:0:0): SCSI status: Check Condition
(da0:mpt0:0:0:0): SCSI sense: MEDIUM ERROR info:2ac4c50f asc:11,0 (Unrecovered read error)


/var/log/messages
Code:
Feb  7 21:12:15 freenas ntpd[1647]: kernel time sync status change 2001
Feb  7 22:30:09 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da5p2 offset=366535113728 size=22016
Feb  7 22:32:23 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da0p2 offset=365784224256 size=22016
Feb  7 22:32:23 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da1p2 offset=365784224256 size=22016
Feb  7 22:36:04 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da1p2 offset=577791152640 size=22016
Feb  7 22:37:57 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da4p2 offset=671108917248 size=21504
Feb  7 22:49:40 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da5p2 offset=550663688704 size=22016
Feb  7 22:52:39 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da2p2 offset=365954425344 size=22016
Feb  7 23:08:52 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da3p2 offset=366004417536 size=21504
Feb  7 23:08:52 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da2p2 offset=840747707904 size=22016
Feb  7 23:09:43 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da5p2 offset=550637620224 size=21504
Feb  7 23:11:30 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da1p2 offset=841753417728 size=22016
Feb  7 23:20:58 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da1p2 offset=670752142848 size=22016
Feb  7 23:26:11 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da0p2 offset=364851664384 size=21504
Feb  7 23:58:10 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da3p2 offset=671267539456 size=22016
Feb  8 00:00:42 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da6p2 offset=364886071808 size=22016
Feb  8 00:00:47 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da4p2 offset=365907234816 size=21504
Feb  8 00:17:29 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da3p2 offset=365721624576 size=21504
Feb  8 00:21:17 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da4p2 offset=827772291072 size=21504
Feb  8 00:32:32 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da7p2 offset=671597391360 size=22016
Feb  8 00:32:53 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da4p2 offset=366536164352 size=22016
Feb  8 00:59:37 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da7p2 offset=840844299264 size=22016
Feb  8 01:04:11 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da5p2 offset=674906525184 size=22016
Feb  8 01:04:37 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da2p2 offset=840829131264 size=22016
Feb  8 01:05:01 freenas kernel: pid 6644 (python), uid 0: exited on signal 10
Feb  8 01:14:33 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da7p2 offset=674940188672 size=22016
Feb  8 02:12:32 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da0p2 offset=467274264576 size=21504
Feb  8 02:34:49 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da5p2 offset=834673201152 size=22016
Feb  8 02:56:16 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da7p2 offset=454309485056 size=22016
Feb  8 03:25:22 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da6p2 offset=454674489344 size=22016
Feb  8 03:31:38 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da4p2 offset=465748211712 size=21504
Feb  8 03:56:02 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da2p2 offset=836483054592 size=22016
Feb  8 04:36:33 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da3p2 offset=670385190400 size=22016
Feb  8 04:55:01 freenas kernel: pid 10323 (python), uid 0: exited on signal 11
Feb  8 06:52:25 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da6p2 offset=646700324352 size=22016
Feb  8 07:20:17 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da2p2 offset=660037121536 size=21504
Feb  8 08:15:07 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da1p2 offset=828228152320 size=22016
Feb  8 08:42:32 freenas root: ZFS: checksum mismatch, zpool=RINZLER path=/dev/da6p2 offset=448742734336 size=22016
 

louisk (Patron, joined Aug 10, 2011, 441 messages)
I would tend to trust FreeBSD (FreeNAS) in this case and say there is something odd going on at the hardware level. ZFS is telling you that the data it's reading doesn't match the data that was written.

It does seem a little odd that all the spindles are acting the same. I don't know whether they are 4K disks, or whether you set them up as 4K.
I would also check out the motherboard; perhaps try putting the disks on an external (PCIe) controller and see if the problem persists.
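
As a rough first check (the pool name RINZLER is taken from your log lines above), you can see how ZFS itself is tallying these errors per disk from the FreeNAS shell:

Code:
zpool status -v RINZLER

The CKSUM column shows how many mismatches ZFS has counted against each device, and the status/action text says whether it was able to repair them from the RAIDZ parity.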
 

frr792 (Cadet, joined Feb 8, 2012, 4 messages)
Is ZFS just telling me that the checksum doesn't match, or is it also correcting it?

These disks are not 4K disks, and I did not set them up that way either. All 8 of them are on the Intel SAS card that I flashed with the non-RAID LSI firmware.

I wonder if I should shut down FreeNAS, pull out da0, da7, and a random one (da3?), and run SMART tests from my desktop to see if the results match. That would confirm whether the SMART messages I have been receiving are legitimate. I just pulled these drives out of a Windows 7 machine running RAID5 on a RR2680, which reported no SMART errors.

Another thing I could do is upgrade to the more recent release or beta of FreeNAS and see whether the same messages still occur.
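
I suppose I could also try reading the SMART data in place from the FreeNAS shell before pulling anything, something along these lines (not sure whether smartctl needs a -d sat hint behind this controller):

Code:
smartctl -a /dev/da0       # dump SMART attributes and the drive's error log
smartctl -t long /dev/da0  # start a long self-test; check the result later with -a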
 

louisk (Patron, joined Aug 10, 2011, 441 messages)
I expect that the issue is hardware, whether it's the card, the spindles, or the motherboard.

ZFS is telling you that the checksum doesn't match, and yes, it will correct it (repairing from the pool's redundancy) unless it says otherwise.

Are you sure that the firmware for the LSI card is functioning properly?
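
If you want to force the pool to read and repair everything, a scrub is the usual way, and the mpt probe messages should show what the card reported at boot (rough sketch):

Code:
zpool scrub RINZLER        # read every block and repair from parity where possible
zpool status -v RINZLER    # watch scrub progress and the per-disk CKSUM counters
dmesg | grep -i mpt        # review the controller probe lines for card/firmware details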
 

louisk (Patron, joined Aug 10, 2011, 441 messages)
No idea about testing. I would probably look at what the supported OSes are for the card, install one, and see if you get issues. Hopefully they support Solaris so you could continue testing with ZFS; that would make it easier.

You might also try a different set of drives (size doesn't really matter for this test) and see if the issues persist.
 