Hello,
I have searched and read but I could not find what the problem is.
I have an ESXi host with an M1015 in IT mode passed through to FreeNAS.
I have a raidz1-0 pool now in a degraded state, but I wonder what the error means:
[root@freenas] ~# zpool status -vx R5_ZFS
  pool: R5_ZFS
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Dec 20 16:19:50 2015
        1.46T scanned out of 9.31T at 228M/s, 10h1m to go
        0 repaired, 15.68% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        R5_ZFS                                          DEGRADED     0     0     2
          raidz1-0                                      DEGRADED     0     0    12
            gptid/7e2f685b-b563-11e4-bada-000c2968ee9d  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/9e9dbc6c-ba20-11e2-a84b-000c29c65a74  ONLINE       0     0     0
            gptid/b6bf336c-5050-11e3-af3f-000c29b26d10  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/9facd3b4-ba20-11e2-a84b-000c29c65a74  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/a05e8efb-ba20-11e2-a84b-000c29c65a74  ONLINE       0     0     0  block size: 512B configured, 4096B native
            9450639838497655820                         UNAVAIL      0     0     0  was /dev/gptid/965e7640-a3da-11e5-97b1-000c293b1cf3

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x104>
I have replaced the unavailable unit, but although the resilver finishes correctly, the resilver starts again when I reboot the VM.
I have also scrubbed the pool, but this error is not fixed.
Does anyone know what this error is about?
I also do not understand what the CKSUM counts at the pool level mean:
NAME        STATE     READ WRITE CKSUM
R5_ZFS      DEGRADED     0     0     2
  raidz1-0  DEGRADED     0     0    12
I am starting to think I will need to restore from backup after rebuilding the pool.
I would really appreciate any help.
Best regards